
Google Cloud Platform is a set of services that Google offers to customers, providing different types of computing functionality, for example Virtual Machines, databases and message queues. These services run on the same global infrastructure that Google uses for its own products. In this article, I'll show how to create and connect to a Virtual Machine running on Google Cloud.

To complete the actions in this article, you'll need a Google Cloud Platform account. One can be created at https://cloud.google.com. There is a charge for using the Google Cloud Platform, but when you sign up you get $300 of free credit. There are also several services that are always free as long as you stay within the free usage tier. Information about GCP pricing can be found at https://cloud.google.com/pricing/

Creating a VM

The easiest way of creating a VM within GCP is from within the Web Console. Navigate to https://console.cloud.google.com/ to display the web console. Along the top of the page, you'll see the menu and the name of the currently selected project. When you sign up, GCP creates an initial project for you. If you don't have one, navigate to https://console.cloud.google.com/projectcreate to create a new project and then return to the console home page.

The screenshot above shows the GCP console with My First Project selected.

From within the console, press the menu button at the top left. A menu is displayed down the left hand side of the page showing all of the different options that can be managed from within the console.

For this article, we're interested in creating a new Virtual Machine. VMs are part of the GCP Compute Engine, so select Compute Engine from the menu.

On the Compute Engine page, a list of the VMs you have previously created is displayed. If this is the first VM you are creating, you will instead see the welcome page with the option of creating a new VM. Simply press the Create button to continue.

On the following page, we can enter the details of the VM we want to create: we can name it, select the region to create it in, and choose the amount of RAM and disk space it has, as well as all the other options that define a VM. For this article, since we're just creating a basic VM, we'll go through only the common options needed to create one.

Name

This is the name of the VM when it is running, and the name displayed in the console where you manage your instances. The name must start with a lowercase letter and can contain lowercase letters, numbers and hyphens.

Region / Zone

This defines the geographic location where the VM is created. It defaults to us-east1, which is in South Carolina. For the purposes of this article, we will leave the default region as us-east1 and the default zone as us-east1-b.

Machine Type

This option allows us to configure the number of cores, the memory and the GPUs that the VM will possess. Note that GPUs are not presently available in all regions, so if you select a region other than us-east1, the options may differ from those shown here.

Selecting the CPU drop-down allows the number of CPUs used by the VM to be defined. Obviously, the more CPUs you select, the higher the monthly cost of the VM. The estimated cost of the VM is always displayed on the right-hand side of the page whilst customising the VM.

For this article, select the Machine Type as micro. This gives the VM 1 shared vCPU with 0.6 GB RAM and no GPU. This is the smallest of the VMs that can be created, and this machine type can be run in the free tier. For more information about the Free Tier, and what you can run in it, see https://cloud.google.com/free/docs/gcp-free-tier#always-free

Boot Disk

Next, we get to define the Boot disk and operating system for the VM.

Within this section, we can select the OS, the size of the boot disk and a number of different image types. The default option is to use Debian Linux 9 (stretch) with a 10 GB boot disk. This is fine for demonstration purposes, but can be changed to whatever is best for your use case. A variety of different Linux and Windows operating systems can be selected from here if required.

Identity and API access

The Identity and API access section allows us to define what account, and therefore what security, the applications running on the VM will use. For this article, we aren't running any applications on the VM, so we can leave the default settings.

Firewall

The next section allows us to define whether HTTP and HTTPS traffic are allowed to the instance. Since we're not deploying a web server, we can leave these settings unchecked.

SSH access

Once our VM is running, we'll want to connect to it via SSH so that we can see that everything is there as expected. To be able to do this, we need to add our SSH public key onto the Virtual Machine. Your public key is usually stored within the ~/.ssh/id_rsa.pub file. If you don't have a public / private key, now would be a good time to create one 🙂
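If you don't yet have a key pair, one can be generated with ssh-keygen. This is a sketch: the email comment and the output path are placeholders, so adjust them to suit (and note that ssh-keygen will ask before overwriting an existing key).

```shell
# Generate a 4096-bit RSA key pair; -N "" sets an empty passphrase.
ssh-keygen -t rsa -b 4096 -C "you@example.com" -f "$HOME/.ssh/id_rsa" -N ""

# Print the public half, ready to paste into the GCP console:
cat "$HOME/.ssh/id_rsa.pub"
```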

Take a copy of your public key and paste it into the keydata box on the Security tab. If you're on a Mac, you can copy your public key into the clipboard with

cat ~/.ssh/id_rsa.pub | pbcopy

Starting the VM

That's all that's needed to get a basic VM up and running. Of course, there are many other options available which I'll go through in another article. For the moment though, let's start the VM by clicking the Create button.

Upon pressing this button, GCP will start to provision the VM. This can take up to a minute; when it's complete, you will see a list of the VMs you've created, along with tools to manage them.
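If you prefer the command line, the same VM can be created with the gcloud CLI from the Cloud SDK. This is a sketch, assuming gcloud is installed and authenticated against your project, using the same zone, machine type and boot disk chosen in the console above:

```shell
# Create an f1-micro (shared-core) Debian 9 instance in us-east1-b.
gcloud compute instances create webserver1 \
    --zone=us-east1-b \
    --machine-type=f1-micro \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --boot-disk-size=10GB

# List the project's instances, including their internal and external IPs.
gcloud compute instances list
```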

Connecting to the VM

In the list of VMs, you can see the name (webserver1 in this example) along with its internal and external IP addresses. To connect to the VM, we need to SSH in using the username we defined previously and the external IP address.

From the terminal:

david$ ssh david@104.196.139.230
The authenticity of host '104.196.139.230 (104.196.139.230)' can't be established.
ECDSA key fingerprint is SHA256:cxuubyAwHqGG6SDyGdHHlVtPLnQsYJNpT57E2mok4Dg.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '104.196.139.230' (ECDSA) to the list of known hosts.
Linux webserver1 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
david@webserver1:~$ 

Destroying the VM

That's all there is to it. We've created a Debian VM, and connected to it. We can now use the VM for whatever we need.

Finally, to delete the VM (and ensure there are no recurring costs), select the VM in the list of instances and press the Trash Can Icon. Remember, VMs are charged by usage, so if you're not using it, delete it to stop recurring costs.
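The same clean-up can be done from the command line, again assuming the gcloud CLI is set up and the instance is named webserver1 as above:

```shell
# Delete the instance (and, by default, its boot disk) without prompting.
gcloud compute instances delete webserver1 --zone=us-east1-b --quiet
```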

When developing any application, it's quite common to have to access multiple databases. Out of the box, Spring Boot provides easy access to a single datasource, in the simplest case just by specifying the JDBC driver on the classpath!

Accessing multiple databases, however, is still straightforward with Spring Boot. This article shows how to connect to two different MySQL datasources from a Spring Boot application.

To showcase how to connect to two different databases, consider a products database and a customer database, with the following simple schema and data.

Database One - Products Database

Schema - create table PRODUCT(id integer, name varchar(255));
Data - insert into PRODUCT(id, name) values (1, 'XBox');

Database Two - Customer Database

Schema - create table CUSTOMER(id integer, name varchar(255));
Data - insert into CUSTOMER(id, name) values (1, 'Daphne Jefferson');

To access the databases, we need to declare a JdbcTemplate for each database. In Spring, a JdbcTemplate is created from a `DataSource`, which has a set of connection properties (URL, username, password etc.).

@Configuration
public class DataSourceConfig {

  @Bean
  @Qualifier("customerDataSource")
  @Primary
  @ConfigurationProperties(prefix="customer.datasource")
  DataSource customerDataSource() {
    return DataSourceBuilder.create().build();
  }

  @Bean
  @Qualifier("productDataSource")
  @ConfigurationProperties(prefix="product.datasource")
  DataSource productDataSource() {
    return DataSourceBuilder.create().build();
  }

  @Bean
  @Qualifier("customerJdbcTemplate")
  JdbcTemplate customerJdbcTemplate(@Qualifier("customerDataSource") DataSource customerDataSource) {
    return new JdbcTemplate(customerDataSource);
  }

  @Bean
  @Qualifier("productJdbcTemplate")
  JdbcTemplate productJdbcTemplate(@Qualifier("productDataSource") DataSource productDataSource) {
    return new JdbcTemplate(productDataSource);
  }
}

In the above code, we can see that a @Configuration class has been declared that defines a customerDataSource and a customerJdbcTemplate. Each of these beans is annotated with @Qualifier("customer...") to identify it as relating to the customer database.

Similarly, the above code defines a productDataSource and a productJdbcTemplate. Again, these are annotated with @Qualifier("product...") to identify them as relating to the product database.

Finally, each DataSource bean is annotated with the @ConfigurationProperties(prefix="...datasource") annotation. This tells Spring Boot which properties within the application.properties file should be used when connecting to each database. The application.properties file therefore looks like the following:

product.datasource.url = jdbc:mysql://localhost:3306/dbOne
product.datasource.username = user1
product.datasource.password = password
product.datasource.driverClassName = com.mysql.jdbc.Driver

customer.datasource.url = jdbc:mysql://localhost:3306/dbTwo
customer.datasource.username = user2
customer.datasource.password = password
customer.datasource.driverClassName = com.mysql.jdbc.Driver

Now that we've seen how to create a DataSource and JdbcTemplate, the JdbcTemplate can be injected into a @Repository for use, e.g.

@Repository
public class CustomerRepository {

  private static final String SELECT_SQL = "select NAME from CUSTOMER where ID=?";

  @Autowired
  @Qualifier("customerJdbcTemplate")
  JdbcTemplate customerJdbcTemplate;

  public String getCustomerName(int id) {
    String name = customerJdbcTemplate.queryForObject(SELECT_SQL, new Object[] {id}, String.class);

    return name;
  }
}

Again, note the use of the @Qualifier annotation to specify which JdbcTemplate is required for the different repositories.

The ProductRepository is similarly written to access the productJdbcTemplate

@Repository
public class ProductRepository {

  private static final String SELECT_SQL = "select NAME from PRODUCT where ID=?";

  @Autowired
  @Qualifier("productJdbcTemplate")
  JdbcTemplate productJdbcTemplate;

  public String getProductName(int id) {
    String name = productJdbcTemplate.queryForObject(SELECT_SQL, new Object[] {id}, String.class);

    return name;
  }
}
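To see both repositories working side by side, they can be injected into a simple service. This is a sketch: the ShopService class and its describeOrder method are invented for illustration, but the repository calls match the classes above.

```java
@Service
public class ShopService {

  @Autowired
  private CustomerRepository customerRepository;

  @Autowired
  private ProductRepository productRepository;

  // Reads from both databases via their separately qualified JdbcTemplates.
  public String describeOrder(int customerId, int productId) {
    String customer = customerRepository.getCustomerName(customerId);
    String product = productRepository.getProductName(productId);
    return customer + " ordered a " + product;
  }
}
```

With the sample data above, describeOrder(1, 1) would read from both databases and combine the results.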

With a few simple steps, Spring Boot allows us to easily connect to multiple databases when using JdbcTemplates.

The Eclipse MicroProfile version 1.1 has been released and builds upon the version 1.0 release by adding support for the Configuration API.

With the inclusion of the new Configuration API, Eclipse MicroProfile, whose tag line is "Optimizing Enterprise Java for a microservices architecture", now supports the following APIs:

  1. Configuration 1.0
  2. CDI 1.2
  3. JSON-P 1.0
  4. JAX-RS 2.0.1

The full specification for this release can be downloaded from here.

There is a vast amount of developer resources available around Eclipse MicroProfile, including sample code and online resources on the project's site. These are worth reading for those new to MicroProfile, as is the MicroProfile site at http://microprofile.io

The WildFly team have announced that WildFly 11 Beta 1 is now available. This release is now feature complete.

The key highlights of WildFly 11 Beta 1 are:

  • New Security Infrastructure - Elytron
  • Simplification of JNDI and EJB invocation
  • HTTP/2 support
  • Out of the box load balancer configuration

Of these changes, the most significant is the Elytron security system.

Elytron offers a centralised security framework that can be used both by applications deployed to the application server and by the application server itself, thus providing a consistent approach to security for WildFly 11 users. Elytron covers both authentication and authorization.

WildFly 11 Beta 1 can be downloaded directly from http://wildfly.org/downloads/

What are your thoughts on this new Beta? How does it compare to previous versions of WildFly that you've used? Get involved in the community and leave your thoughts below.

Running Payara Micro services on Heroku is incredibly straightforward.

There are two basic ways of running a Payara Micro service on Heroku:

  1. Create a Fat Jar
  2. Deploy a .War file along with PayaraMicro.jar

I prefer to deploy applications as .War files rather than Fat Jars, so in this example, I'll show how to create a Heroku application and deploy a Payara Micro service to it.

Enabling a Payara Micro Application For Heroku

To enable a Payara Micro application to run on Heroku, there are two simple changes we have to make. The first is simply to bundle the Payara Micro distribution jar within the Maven project. (You could create a standalone app and manage everything with Maven, but I think this is a simpler solution.) Download Payara Micro and place it in the lib folder of the Maven project.

.
|___.gitignore
|___lib
| |___payara-micro-4.1.1.164.jar
|___pom.xml
|___Procfile
|___src
| |___main
| | |___java

The reason for deploying Payara Micro is simply that we need it to launch any service .War files. *Remember, we could use a Fat Jar if that's your preference.*

Secondly, we need to create a file telling Heroku how to run the service we deploy. This file, Procfile, needs to be created at the root of the Maven project structure, as shown above. Its contents tell Heroku how to start Payara Micro and deploy the application:

web: java -jar lib/payara-micro-4.1.1.164.jar --deploy target/PayaraHeroku.war --port $PORT

There are a few things to say about the Procfile. You can see that it defines a web process that is invoked by java -jar lib/payara-micro-4.1.1.164.jar, followed by a couple of Payara Micro command line options: --deploy tells Payara Micro to deploy the result of the Maven build (target/PayaraHeroku.war in this example), and --port $PORT tells Payara Micro to listen on the port that Heroku has assigned.

So that's it: there are only two small changes needed to a Payara Micro service to enable it to run on Heroku. We need to add the Payara Micro runtime and create a Procfile.

Deploying a Heroku Application

Once you've created a Payara Micro application, it needs to be deployed on Heroku. Deploying a Heroku application is documented in depth on the Heroku Dev Centre. You'll find everything you need to know about deploying Java applications there, so I'll only go over this briefly.

The basic steps are to:

  1. Create a Heroku Application heroku create
  2. Commit your Payara Micro service git add ... && git commit ...
  3. Push the git repository to Heroku git push heroku master
  4. Scale the application accordingly heroku ps:scale web=1
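Put together, the steps above look like this from the project root. This is a sketch assuming the Heroku CLI is installed and you're logged in; the application name Heroku generates will differ.

```shell
# Create the Heroku application (Heroku picks a random name unless one is given).
heroku create

# Commit the project, including the Payara Micro jar and the Procfile.
git add .
git commit -m "Payara Micro service with Procfile"

# Push to Heroku, which triggers the Maven build remotely.
git push heroku master

# Run one web dyno, then tail the logs to watch Payara Micro start.
heroku ps:scale web=1
heroku logs --tail
```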

Once the git repository is pushed to Heroku, Heroku will work out that this is a Java application and will start the Maven build. This will create the artefact in the target directory on Heroku which is referenced by the Procfile. Once the build is complete, Heroku will start the Payara Micro service using the options defined within the Procfile.

And that's it! As you can see, there's very little you need to do to be able to get your Payara Micro services running on Heroku.