PATH environment variable in IntelliJ on macOS

Lately I encountered a problem with IntelliJ on macOS that I hadn’t been aware of before. For a new project I joined, Java had to execute a shell script which resides in /usr/local/bin and calls another script. Putting aside all questions about the reasons behind this, I was pretty puzzled when I couldn’t get it to work, while on all my colleagues’ Linux machines it worked like a charm.
After fiddling around with it for a while it became clear that the JVM started from IntelliJ didn’t have /usr/local/bin in its PATH environment variable and therefore couldn’t execute the script. The question was why. When I started the application via Gradle on the shell everything worked correctly, which seemed to point to an IntelliJ problem. After some research it turned out that the root cause is a macOS concept which separates the shell environment variables from the environment variables for GUI processes, or more precisely for processes started from Spotlight, Finder or the desktop.
The standard environment variables for GUI applications on macOS are controlled via launchctl, and the PATH there defaults to /usr/bin:/bin:/usr/sbin:/sbin.
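To see what this means in practice, the GUI-level variables can be inspected and changed from a terminal with launchctl; a small sketch (the change only lasts until the next logout or reboot and only affects applications started afterwards):

    # show the PATH that GUI applications such as IntelliJ will see
    launchctl getenv PATH

    # temporarily extend it for GUI applications started from now on
    launchctl setenv PATH /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin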

The quick solution to get around this and use the environment variables set in the shell via .bashrc, .zshrc etc. is to start IntelliJ (or any other UI application) from the command line. This can be done with a command like the following:
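A minimal example, assuming a default installation under /Applications (the bundle name differs for the Community Edition); launching the binary directly like this makes IntelliJ inherit the complete shell environment, including PATH:

    # start IntelliJ IDEA from the shell so it inherits the shell environment
    /Applications/IntelliJ\ IDEA.app/Contents/MacOS/idea &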

After that IntelliJ runs in the shell context and has access to all the variables set there. If this doesn’t suit you and you want a more permanent setup which lets you start IntelliJ without the shell and still gives it access to the environment variables, there is a solution for that, too.
You have to create the file ~/Library/LaunchAgents/environment.plist. A plist file is a property list, a configuration format widely used in macOS (read more about it here). In such a file startup configuration can be stored. A minimal configuration that only controls the environment variables could look like this:
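A rough sketch of such a launch agent (the label and the PATH value are only examples; at login launchd runs the embedded launchctl setenv command, which sets the PATH for all GUI applications started afterwards):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
      <dict>
        <!-- arbitrary, unique name for this launch agent -->
        <key>Label</key>
        <string>my.startup.environment</string>
        <!-- set the PATH for GUI applications started after login -->
        <key>ProgramArguments</key>
        <array>
          <string>sh</string>
          <string>-c</string>
          <string>launchctl setenv PATH /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin</string>
        </array>
        <key>RunAtLoad</key>
        <true/>
      </dict>
    </plist>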

Read More »


Automation via shell aliases

When working on software development projects there are many repetitive tasks to do, be it deploying a binary, starting different servers in Docker containers or standalone, exchanging config files for different environments, migrating a database, or something as simple as navigating to deeply nested paths on the command line to do some editing or start a server there. All these tasks are annoying and over time they add up to a significant amount of unproductive time. The logical countermeasure for a developer is to automate as many of these recurring tasks as possible. For example you can bundle the start of different Docker containers via docker compose or delegate some tasks to a CI server, but in the end some tasks will remain which you have to trigger manually -at least on your development machine- and this is where your shell comes in handy with a feature called ‘alias’. An alias is an automation feature which every popular Linux/Unix/macOS shell provides. With an alias you can define a new command that executes a series of shell commands automatically.
My rule of thumb is to create an alias for every manual task I have to do at least three times a week (to be honest, I also create an alias for tasks I only do twice a week, just because it is such a handy feature).
To define a new alias all you have to do is add it to the config file of your shell. For the very popular bash shell this file is called .bashrc, for zsh it is called .zshrc; in both cases it sits in the user’s home folder. Open the rc file with a text editor, scroll to the end and add a new line starting with the keyword ‘alias’, followed by the alias name you chose and the command that should be executed. A sample alias could look like this:
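For illustration, a sketch of such a line in ~/.bashrc or ~/.zshrc (the alias name and the path are made up):

    # jump into a deeply nested project directory with one short command
    alias cdapi='cd ~/projects/customer/backend/api-server/src/main'

After reloading the config (for example with source ~/.bashrc) or opening a new terminal, typing cdapi executes the whole cd command.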

Read More »


Integrating Flyway with Java EE and using its datasource

Flyway is my favourite tool when it comes to database migrations because its convention-over-configuration approach makes it fairly easy to use, while bootstrapping and configuration are reduced to a minimum. For those of you who don’t know database migration tools (evolutionary database design): in short, the concept is to track SQL scripts together with your source code, reflecting the database version that the current code version needs to run properly. This could be done without any framework, but what tools like Flyway or Liquibase bring to the table is the ability to check automatically at build or start time whether the database is at the latest available version and, if not, to run all SQL scripts from the current database version up to the newest one. Flyway can be used with all the major SQL dialects and many different relational databases, and has plugins for a huge variety of frameworks. For more detailed information visit the project’s website.
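To give an idea of the convention-over-configuration approach: by default Flyway looks for versioned scripts on the classpath under db/migration, named with a version prefix and a description; a typical layout might look like this (file names are only examples):

    src/main/resources/db/migration/
        V1__create_customer_table.sql
        V2__add_email_column_to_customer.sql
        V3__create_order_table.sql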

As I’m a heavy Java EE user I ran into the problem that Flyway has no special support for Java EE. There is no built-in way to get the actual JPA datasource (defined in the persistence.xml) and no provided mechanism that wires Flyway into the startup process to run the migration scripts at startup. Flyway provides integration plugins with these functionalities for other frameworks like Spring Boot, but fortunately it is not that complicated to realise the same thing in Java EE.

In many scenarios it is no problem to trigger Flyway at the right moment, as it can be integrated into the Maven build process or started manually via the command line, but neither solution fits my needs. I want Flyway to check my database at startup, because the production server is not the server where the build happens, and I want to avoid having to execute a shell command manually after deploying a new version. For these cases Flyway can be started from code as well. Since it has to run before the application starts using the database, you have to make sure the Flyway migrations are executed before that point. I found two solutions to make this work. The first one uses Hibernate’s “Service Provider Interface” (SPI) -I use WildFly in this example, which ships with Hibernate- to register a new integrator and grab the datasource via a little bit of reflection code, while the second approach (which is my favoured solution) defines a new singleton bean which uses @Resource injection to get the datasource.
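To make the second, favoured approach a bit more concrete, here is a minimal sketch of such a startup singleton (the JNDI name is an assumption and the Flyway calls follow the pre-5.x API, so the details may need adjusting to your setup):

    import javax.annotation.PostConstruct;
    import javax.annotation.Resource;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;
    import javax.sql.DataSource;
    import org.flywaydb.core.Flyway;

    @Singleton
    @Startup
    public class FlywayMigrationBean {

        // JNDI name of the datasource, assumed to match the one referenced in persistence.xml
        @Resource(lookup = "java:jboss/datasources/MyAppDS")
        private DataSource dataSource;

        @PostConstruct
        public void migrate() {
            Flyway flyway = new Flyway();
            flyway.setDataSource(dataSource);
            // runs all pending scripts from db/migration before the application uses the database
            flyway.migrate();
        }
    }

Depending on the application, other startup beans may need a @DependsOn on this bean so the migration really runs first.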

Hibernate SPI integrator solution
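Without reproducing the original code, a rough sketch of the idea could look like the following; it targets the Hibernate 5 SPI as shipped with WildFly and, instead of reflection, unwraps the datasource from Hibernate’s ConnectionProvider (class names and the Flyway calls are assumptions):

    import javax.sql.DataSource;
    import org.flywaydb.core.Flyway;
    import org.hibernate.boot.Metadata;
    import org.hibernate.engine.jdbc.connections.spi.ConnectionProvider;
    import org.hibernate.engine.spi.SessionFactoryImplementor;
    import org.hibernate.integrator.spi.Integrator;
    import org.hibernate.service.spi.SessionFactoryServiceRegistry;

    public class FlywayIntegrator implements Integrator {

        @Override
        public void integrate(Metadata metadata, SessionFactoryImplementor sessionFactory,
                              SessionFactoryServiceRegistry serviceRegistry) {
            // ask Hibernate for its connection provider and unwrap the underlying datasource
            ConnectionProvider connectionProvider = serviceRegistry.getService(ConnectionProvider.class);
            DataSource dataSource = connectionProvider.unwrap(DataSource.class);

            Flyway flyway = new Flyway();
            flyway.setDataSource(dataSource);
            flyway.migrate();
        }

        @Override
        public void disintegrate(SessionFactoryImplementor sessionFactory,
                                 SessionFactoryServiceRegistry serviceRegistry) {
            // nothing to clean up
        }
    }

The integrator is registered by putting its fully qualified class name into a file named META-INF/services/org.hibernate.integrator.spi.Integrator on the classpath.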

Read More »


Dynamically switching dev and prod datasources with Maven profiles

Most of the time when you develop an application that uses a database, you will likely use a different database for your local development work than the one used later in production. The main reason is that there are databases like H2 which by design fit the development situation well while they are not well suited for production use. H2, for instance, has very fast startup times, can run completely in memory -an advantage because every time it is restarted its state is reset, too-, uses only few system resources and has good MySQL compatibility. On the other hand it is not that popular in production as it lacks many features like custom functions, an SQL interpreter etc. It is therefore a very common scenario to use MySQL in production while using H2 for development. This way the developer can use the same database dialect for development without needing a local MySQL installation.
The problem with this scenario is that you need to switch the datasource to production when you build a new release and switch back to development when you want to use your local H2 for testing. This is easily forgotten and becomes annoying over time. Therefore it is a step which can and should be automated.
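One common way to automate this with the Maven profiles from the title is to let the active profile decide which configuration directory is copied into the build; a rough sketch for the pom file (profile ids and directory paths are made up):

    <profiles>
        <profile>
            <!-- default: development configuration with the H2 datasource -->
            <id>dev</id>
            <activation>
                <activeByDefault>true</activeByDefault>
            </activation>
            <build>
                <resources>
                    <resource>
                        <directory>src/main/config/dev</directory>
                    </resource>
                </resources>
            </build>
        </profile>
        <profile>
            <!-- activated with "mvn package -Pprod": production MySQL configuration -->
            <id>prod</id>
            <build>
                <resources>
                    <resource>
                        <directory>src/main/config/prod</directory>
                    </resource>
                </resources>
            </build>
        </profile>
    </profiles>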
When you are using Java EE you define your datasource, depending on the application server, either via an interactive tool (e.g. GlassFish or Payara via the asadmin command) or in an XML file (e.g. WildFly/JBoss standalone.xml or TomEE context.xml). As an example, here is a valid datasource definition in the WildFly standalone.xml:
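A sketch of such a definition (the JNDI name, connection URL and credentials are placeholders; it belongs inside the <datasources> element of the datasources subsystem):

    <datasource jndi-name="java:jboss/datasources/MyAppDS" pool-name="MyAppDS" enabled="true">
        <!-- in-memory H2 database for development; DB_CLOSE_DELAY keeps it alive between connections -->
        <connection-url>jdbc:h2:mem:myapp;DB_CLOSE_DELAY=-1</connection-url>
        <driver>h2</driver>
        <security>
            <user-name>sa</user-name>
            <password>sa</password>
        </security>
    </datasource>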

Read More »


Apache Shiro part 2 – securing a JSF Java EE 7 application


In the first part I described why I chose Apache Shiro as an authentication framework. In this part I will describe the simplest working solution to secure a Java EE 7 application with a JSF/PrimeFaces frontend. To be honest I won’t use much of PrimeFaces in this sample, but the application for which I did this research uses it, so I added the dependency and a PrimeFaces component to this demo project.

Bootstrapping

I created an empty Java EE project via Maven and added the needed Shiro dependencies as well as the PrimeFaces dependency to the pom file.
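A sketch of the relevant dependency section (the version numbers are only examples from around the Java EE 7 era and should be replaced with current ones):

    <dependency>
        <groupId>org.apache.shiro</groupId>
        <artifactId>shiro-core</artifactId>
        <version>1.4.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.shiro</groupId>
        <artifactId>shiro-web</artifactId>
        <version>1.4.0</version>
    </dependency>
    <dependency>
        <groupId>org.primefaces</groupId>
        <artifactId>primefaces</artifactId>
        <version>6.1</version>
    </dependency>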

Read More »
