Checking IMEI Numbers Using Java Programs

The IMEI (International Mobile Equipment Identity) number is a unique number used to identify mobile phones. Mobile networks use the IMEI to uniquely identify a device, which is why the IMEI number can be used to block stolen devices.

An IMEI number is composed of 15 decimal digits, the last of which is a check digit computed using the Luhn algorithm. The check digit protects against data entry errors when an IMEI number is keyed into a device.

The following algorithm is used to find the check digit of an IMEI number.

  • Take the 14 digits of the IMEI number and, starting from the rightmost digit, double every other digit (the rightmost digit, the third from the right, and so on).
  • If doubling a digit produces a two-digit number, add its digits together (for example, 7 doubled is 14, which becomes 1 + 4 = 5).
  • Find the sum of all the digits, using the doubled values computed above in place of the original digits.
  • Multiply the sum by 9 and then divide by 10. The remainder is the check digit.

Here is an example illustrating the computation of an IMEI check digit,

  • 14 digits of an IMEI number = 42015420343761
  • Sum of digits (doubling every other digit from the right) = 4+4+0+2+5+8+2+0+3+8+3+14+6+2 = 4+4+0+2+5+8+2+0+3+8+3+5+6+2 = 52. Note that the doubled 7 gives 14, which is reduced to 1 + 4 = 5.
  • Multiply by 9 = 52 * 9 = 468
  • Divide by 10 and find the remainder = 468 % 10 = 8
  • The check digit is 8. This is the number which, when added to 52, makes the total divisible by 10.
  • The full IMEI number is 420154203437618.

Problem: Find the check digit from the first 14 digits of an IMEI number using Java

The following Java program computes the check digit of an IMEI number given its first 14 digits. It can be used both to complete and to verify IMEI numbers.

import java.util.Scanner;

// IMEI Java check digit generator example source code
public class IMEICheckDigitGenerator {

    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);
        System.out.print("Please enter first 14 digits of IMEI number: ");
        String input = s.nextLine();
        
        int checkDigit = getCheckDigit(input);
        
        System.out.println("Check digit: "+checkDigit);
        System.out.println("IMEI Number: "+input+checkDigit);
        s.close();
    }
    
    // Returns check digit for 14 digit IMEI prefix
    public static int getCheckDigit(String imeiPrefix) {
        int sum = 0;
        // Walk the 14 digits from right to left; digits at odd 0-based indexes
        // are the "every other digits" that must be doubled
        for (int i = 13; i >= 0; i--) {
            int digit = Integer.parseInt(imeiPrefix.substring(i, i + 1));
            if (i % 2 == 0) {
                sum = sum + digit;
            } else {
                // Double the digit and add the digits of the result (e.g. 7 -> 14 -> 5)
                sum = sum + sumOfDigits(digit * 2);
            }
        }
        sum = sum * 9;
        return sum % 10; // Return check digit
    }
    
    // Calculate sum of digits for a number
    public static int sumOfDigits(int number) {
        int sum = 0;
        while (number > 0) {
            sum += number % 10;
            number = number / 10;
        }
        return sum;
    }
}

Following is a sample output of the above program,

Please enter first 14 digits of IMEI number: 42015420343761
Check digit: 8
IMEI Number: 420154203437618

Problem: Check whether a given 15 digit number is a valid IMEI number using Java

The following Java program verifies whether a given IMEI number is valid. It recomputes the check digit from the first 14 digits and compares it with the last digit.

import java.util.Scanner;

// Java program to check whether an IMEI number is valid
public class IMEIValidatorInJava {

    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);
        System.out.print("Please enter a 15 digit IMEI number: ");
        String input = s.nextLine();
        
        int computedCheckDigit = getCheckDigit(input.substring(0,14));
        int checkDigitInSource = Integer.valueOf(input.substring(14));
        
        if(computedCheckDigit == checkDigitInSource) {
            System.out.println(input+" is a valid IMEI number!");
        }else {
            System.out.println(input+" is NOT a valid IMEI number!");
            System.out.println("Check digit computed: "+computedCheckDigit);
        }
        
        s.close();
    }
    
    // Returns check digit for 14 digit IMEI prefix
    public static int getCheckDigit(String imeiPrefix) {
        int sum = 0;
        // Walk the 14 digits from right to left; digits at odd 0-based indexes
        // are the "every other digits" that must be doubled
        for (int i = 13; i >= 0; i--) {
            int digit = Integer.parseInt(imeiPrefix.substring(i, i + 1));
            if (i % 2 == 0) {
                sum = sum + digit;
            } else {
                // Double the digit and add the digits of the result (e.g. 7 -> 14 -> 5)
                sum = sum + sumOfDigits(digit * 2);
            }
        }
        sum = sum * 9;
        return sum % 10; // Return check digit
    }
    
    // Calculate sum of digits for a number
    public static int sumOfDigits(int number) {
        int sum = 0;
        while (number > 0) {
            sum += number % 10;
            number = number / 10;
        }
        return sum;
    }
}

Following is a sample output from the program,

Please enter a 15 digit IMEI number: 914859533683732
914859533683732 is NOT a valid IMEI number!
Check digit computed: 0

How to Calculate CRC32 Checksum in Java

Cyclic Redundancy Check (CRC) is an error detection technique commonly used to detect changes to raw data. A CRC checksum is a short, fixed-length value derived from a larger block of data. If the original data changes, a freshly computed CRC checksum will differ from the checksum received from the source. This technique is used, for example, to detect data errors when a file is read from a storage system: some file systems store a checksum along with each file's content, and if the checksum does not match when recalculated from the content, we know that the file on disk is corrupted.

There are differences between CRC checksums and cryptographic hash functions such as MD5 and SHA1. CRC checksums are simpler and faster to compute, but they are not cryptographically secure. Hence CRC checksums are used for detecting accidental data errors, while cryptographic hash functions are used where protection against deliberate tampering is required.
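
To make the contrast concrete, here is a minimal sketch that computes both a CRC32 checksum and an MD5 hash of the same input using the JDK's built-in classes java.util.zip.CRC32 and java.security.MessageDigest. The class name ChecksumVsHash is just an illustrative choice,

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.zip.CRC32;

// Compares a CRC32 checksum with an MD5 hash of the same input
public class ChecksumVsHash {

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] data = "Hello World!".getBytes(StandardCharsets.UTF_8);

        // CRC32: fast 32-bit checksum, suited to detecting accidental corruption
        CRC32 crc = new CRC32();
        crc.update(data);
        System.out.println("CRC32: " + Long.toHexString(crc.getValue()));

        // MD5: 128-bit digest from a cryptographic hash function
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest(data)) {
            hex.append(String.format("%02x", b));
        }
        System.out.println("MD5: " + hex);
    }
}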

The CRC32 algorithm produces a 32-bit checksum from the input data. It is very easy to calculate the CRC32 checksum of a given string in Java. The following example program generates the checksum using the built-in class java.util.zip.CRC32.

Java Source Code for CRC32 Checksum Calculation

import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Calculates CRC32 checksum for a string
public class CRC32Generator {

    public static void main(String[] args) {
        String input = "Hello World!";
        CRC32 crc = new CRC32();
        crc.update(input.getBytes(StandardCharsets.UTF_8)); // encode with an explicit charset
        System.out.println("input:"+input);
        System.out.println("CRC32:"+crc.getValue());
    }
}

Here is the output of the program,

input:Hello World!
CRC32:472456355
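
Since CRC checksums are often used to verify files read from storage, here is a minimal sketch of computing the CRC32 checksum of a file by streaming its contents through java.util.zip.CheckedInputStream. The file name input.txt is only an assumed example,

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.CRC32;
import java.util.zip.CheckedInputStream;

// Computes the CRC32 checksum of a file by streaming its contents
public class CRC32FileChecksum {

    public static void main(String[] args) throws IOException {
        String fileName = "input.txt"; // assumed example file
        long checksum;
        try (CheckedInputStream in = new CheckedInputStream(
                new BufferedInputStream(new FileInputStream(fileName)), new CRC32())) {
            byte[] buffer = new byte[4096];
            // Reading through the stream updates the checksum as a side effect
            while (in.read(buffer) != -1) {
            }
            checksum = in.getChecksum().getValue();
        }
        System.out.println("CRC32: " + checksum);
    }
}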

How to Check Pronic Number in Java

A pronic number is a product of two consecutive integers, so pronic numbers can be expressed in the form n(n+1). For example, 6 is a pronic number since it is the product of the two consecutive integers 2 and 3. These numbers are also known as oblong numbers or heteromecic numbers. The following are the first 10 pronic numbers,

0, 2, 6, 12, 20, 30, 42, 56, 72, 90.

Pronic numbers have some interesting properties. All pronic numbers are even integers, and the n-th pronic number is the sum of the first n even integers.
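
As a quick illustration of the second property, the following sketch checks n(n+1) against the sum of the first n even integers for the first few values of n. The class name PronicPropertyDemo is just an illustrative choice,

// Checks that the n-th pronic number n(n+1) equals the sum of the first n even integers
public class PronicPropertyDemo {

    public static void main(String[] args) {
        for (long n = 1; n <= 10; n++) {
            long pronic = n * (n + 1);
            long sumOfEvens = 0;
            for (long k = 1; k <= n; k++) {
                sumOfEvens += 2 * k; // 2 + 4 + ... + 2n
            }
            System.out.println(pronic + " = " + sumOfEvens);
        }
    }
}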

Problem: Write a Java program to check whether a given number is a pronic number

We know that a pronic number is of the form n(n+1). Hence if we take the square root of a pronic number and round it down to an integer, we get n. If multiplying n by the next integer gives back the original number, the given number is a pronic number! The following Java program uses this logic to check whether a given number is pronic. (One caveat: Math.sqrt operates on doubles, so for extremely large long values the truncated root could be off by one.)

import java.util.Scanner;

// Java pronic number checker example
public class PronicNumberChecker {

    // Check whether a given number is pronic or not
    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);
        System.out.print("Please enter a number: ");
        long number = s.nextLong();
        
        long n = (long) Math.sqrt(number);
        if (n * (n + 1) == number) {
            System.out.println(number + " is a pronic number");
        } else {
            System.out.println(number + " is NOT a pronic number");
        }
        s.close();
    }
}

Here is a sample output of the program,

Please enter a number: 30
30 is a pronic number

Problem: Write a Java program to generate the first n pronic numbers

Generating the first n pronic numbers is trivial. We iterate from 0 to n-1 and print the product of each number with its successor. The following Java program generates the first n pronic numbers.

import java.util.Scanner;

// Java example program to generate pronic number sequence
public class PronicNumberGenerator {

    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);
        System.out.print("How many pronic numbers you need? ");
        long n = s.nextLong();
        
        // Print first n pronic numbers
        for (long i = 0; i < n; i++) {
            System.out.print(i*(i+1));
            if(i!=n-1) {
                System.out.print(",");
            }
        }
        
        s.close();
    }
}

Here is the sample output of the program,

How many pronic numbers do you need? 10
0,2,6,12,20,30,42,56,72,90

How to Write a MapReduce Program in Java

This is a step-by-step tutorial on writing your first Hadoop MapReduce program in Java. The tutorial uses the Gradle build system for the MapReduce Java project and requires a running Hadoop installation.

Quick Introduction to MapReduce

MapReduce is a programming framework which enables processing of very large data sets using a cluster of commodity hardware. It works by distributing the processing logic across a large number of machines, each of which applies the logic locally to a subset of the data. The final result is consolidated and written to the distributed file system.

A MapReduce program has two main components: the mapper and the reducer. The mapper operates on the input data to produce a set of intermediate key/value pairs. This data is then fed to the reducer with the values grouped by key. The reducer computes the final result by operating on the grouped values.

Problem Statement for the MapReduce Program

Problem Statement: Using the MapReduce framework, find the frequency of characters in a very large file (running into a few terabytes!). The output consists of two columns – the character and the number of occurrences of that character in the input file.

We solve this problem using three classes – the mapper, the reducer and the driver. The driver is the entry point for the MapReduce program; Hadoop uses the configured mapper and reducer to compute the desired output.
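
Before writing the Hadoop classes, it may help to see the same data flow in plain Java. The following sketch simulates the map, group and reduce phases for the character counting problem in a single JVM; the class name MapReduceSimulation is just an illustrative choice, and the code is a conceptual illustration rather than part of the Hadoop program developed below,

import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Simulates the MapReduce data flow for character counting in plain Java
public class MapReduceSimulation {

    public static void main(String[] args) {
        String line = "Hello World";

        // Map phase: emit an intermediate (character, 1) pair for every character
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (int i = 0; i < line.length(); i++) {
            pairs.add(new AbstractMap.SimpleEntry<>(line.substring(i, i + 1), 1));
        }

        // Shuffle/group phase: group the emitted values by key
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> pair : pairs) {
            grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>()).add(pair.getValue());
        }

        // Reduce phase: sum the grouped values for each key
        for (Map.Entry<String, List<Integer>> entry : grouped.entrySet()) {
            int sum = 0;
            for (int value : entry.getValue()) {
                sum += value;
            }
            System.out.println(entry.getKey() + "\t" + sum);
        }
    }
}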

Prerequisites for Java MapReduce Program

  • Java 1.8 or above
  • Gradle 3.x or above

Creating the MapReduce Java Project in Gradle

Run the following command on the console to create a simple Java project in gradle. Ensure that gradle and java are already installed on the system.

gradle init --type java-application

This creates an initial set of files for the Java gradle project. Delete App.java and AppTest.java from the new project (contained in src/main/java and src/test/java folders).

Configuring the MapReduce Gradle Build

Replace the build.gradle in the project with the following,

apply plugin: 'java-library'
apply plugin: 'application'

mainClassName = "AlphaCounter"

jar {
    manifest { attributes 'Main-Class': "$mainClassName" }
}
repositories { jcenter() }

dependencies { compile 'org.apache.hadoop:hadoop-client:2.7.3' }

Writing the Mapper Class

Copy the following class to the src/main/java folder. This is the mapper class for our MapReduce program. The framework passes each line of input as the value variable to the map function. Our mapper converts each character of the line into a key/value pair, where the character is the key and the value is 1.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// A mapper class converting each line of input into a key/value pair
// Each character is turned to a key with value as 1
public class AlphaMapper extends Mapper<Object, Text, Text, LongWritable> {
    private final static LongWritable one = new LongWritable(1);
    private Text character = new Text();

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        String v = value.toString();
        for (int i = 0; i < v.length(); i++) {
            character.set(v.substring(i, i + 1));
            context.write(character, one);
        }
    }
}

Writing the Reducer Class

Now copy the following reducer class to the src/main/java folder. The MapReduce framework collects all the values emitted for a specific key (all the 1s emitted for a character, in our example) and passes them to the reduce function. Our function computes the total number of occurrences by adding up the values.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Calculate occurrences of a character
public class AlphaReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    private LongWritable result = new LongWritable();

    public void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

Writing the MapReduce Entry Point Class

Finally, copy the main entry point class for our MapReduce program. It sets up the MapReduce job, including the mapper and reducer classes to use.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// The driver program for mapreduce job.
public class AlphaCounter extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new AlphaCounter(), args);
        System.exit(res);
    }

    @Override
    public int run(String[] args) throws Exception {

        Configuration conf = this.getConf();

        // Create job
        Job job = Job.getInstance(conf, "Alpha Counter Job");
        job.setJarByClass(AlphaCounter.class);

        job.setMapperClass(AlphaMapper.class);
        job.setReducerClass(AlphaReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        job.setInputFormatClass(TextInputFormat.class);

        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setOutputFormatClass(TextOutputFormat.class);

        return job.waitForCompletion(true) ? 0 : 1;
    }
}

Running the Java MapReduce Program

Run the following command from the project folder to create a jar file for our project,

gradle jar

Copy the jar created (found under build/libs) to the hadoop home folder; the commands below assume it is named mapreducedemo.jar. Open a command window and navigate to the hadoop home folder.

First create a simple text file with the content "Hello World" and save it as input.txt. Upload the file to HDFS using the following command, which copies it to the HDFS home folder.

bin/hdfs dfs -put input.txt .

Finally run the mapreduce program from the command line,

bin/hadoop jar mapreducedemo.jar ./input.txt output

Viewing the MapReduce Output

Run the following command to view the output of the mapreduce program,

bin/hdfs dfs -cat output/*

The console output consists of every character in "Hello World" and the number of occurrences of each character as shown below.

     1
H    1
W    1
d    1
e    1
l    3
o    2
r    1

Using a Reducer Program as Combiner

In MapReduce it is possible to configure the reducer as a combiner. A combiner is run locally on each node immediately after the mapper, so it substantially improves the performance of the MapReduce program by reducing the amount of data passed on to the final reducer stage. Note that a combiner can only be used when the reduce function is commutative and associative.

Add the following line to AlphaCounter.java to configure the reducer as the combiner,

job.setCombinerClass(AlphaReducer.class);
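
For example, in this program the line can sit right next to the mapper and reducer configuration in the run() method, as sketched below. This is safe here because summing character counts is both commutative and associative,

        job.setMapperClass(AlphaMapper.class);
        job.setCombinerClass(AlphaReducer.class); // combine counts locally on each mapper node
        job.setReducerClass(AlphaReducer.class);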

History of Spring Framework and Spring Boot

Introduction

Spring framework is arguably one of the most popular application development frameworks used by java developers. It currently consists of a large number of modules providing a range of services, including a component container, aspect oriented programming support for implementing cross cutting concerns, a security framework, a data access framework, a web application framework and support classes for testing components. All the components of the spring framework are glued together by the dependency injection pattern. Dependency injection (a form of inversion of control) makes it easy to design and test loosely coupled software components. The current version of spring framework is 4.3.x, and the next major version, 5.0, is scheduled for release in the fourth quarter of 2017.

Over the years the spring framework has grown substantially. Almost all the infrastructural software components required by a java enterprise application are now available in spring. However, collecting all the required spring components together and configuring them in a new application requires some effort: it involves setting up the library dependencies in gradle/maven and then configuring the required spring beans using xml, annotations or java code. Spring developers soon realized that much of this work could be automated. Enter spring boot!

Spring boot takes an opinionated view of building spring applications. What this means is that for each major use case of spring, spring boot defines a set of default component dependencies and automatic configuration of components. Spring boot achieves this using a set of starter projects. Want to build a spring web application? Just add the dependency on spring-boot-starter-web! Want to use spring email libraries? Just add the dependency on spring-boot-starter-mail! Spring boot also has some cool features such as an embedded application server (jetty/tomcat), a command line interface based on groovy and health/metrics monitoring.

Spring boot enables java developers to quickly start a new project with all the required spring framework components. This article looks at how spring framework and spring boot have become the de facto leaders in java based microservice application development, starting from their humble beginnings in 2002. If you don’t have time, you may want to check out the spring timeline infographic.

History of Spring Framework

The Beginnings

In October 2002, Rod Johnson wrote a book titled Expert One-on-One J2EE Design and Development. Published by Wrox, the book covered the state of Java enterprise application development at the time and pointed out a number of major deficiencies with Java EE and the EJB component framework. In the book he proposed a simpler solution based on ordinary java classes (POJO – plain old java objects) and dependency injection. Following is an excerpt from the book,

The centralization of workflow logic into the abstract superclass is an example of inversion of control. Unlike in traditional class libraries, where user code invokes library code, in this approach framework code in the superclass invokes user code. It’s also known as the Hollywood principle: "Don’t call me, I’ll call you". Inversion of control is fundamental to frameworks, which tend to use the Template Method pattern heavily(we’ll discuss frameworks later).

In the book, he showed how a high quality, scalable online seat reservation application could be built without using EJB. For building the application, he wrote over 30,000 lines of infrastructure code! It included a number of reusable java interfaces and classes such as ApplicationContext and BeanFactory. Since java interfaces were the basic building blocks of dependency injection, he named the root package of the classes com.interface21. As Rod himself explained later, the 21 in the name is a reference to the 21st century!

Expert One-on-One J2EE Design and Development was an instant hit. Much of the infrastructure code freely provided as part of the book was highly reusable, and soon a number of developers started using it in their projects. Wrox had a webpage for the book with the source code and errata, and also provided an online forum for the book. Interestingly, even after 15 years, the book and its principles are still relevant for building high quality java web applications. I highly recommend that you get a copy for your collection!

Spring is Born

Shortly after the release of the book, developers Juergen Hoeller and Yann Caroff persuaded Rod Johnson to create an open source project based on the infrastructure code. Rod, Juergen and Yann started collaborating on the project around February 2003. It was Yann who coined the name "spring" for the new framework; according to Rod, spring represented a fresh start after the "winter" of traditional J2EE! Here is an excerpt from Yann Caroff’s January 2003 review of Rod’s book,

Rod Johnson’s book covers the world of J2EE best practices in an amazingly exhaustive, informative and pragmatic way. From coding standards, idioms, through a fair criticism of entity beans, unit testing, design decisions, persistence, caching, EJBs, model-2 presentation tier, views, validation techniques, to performance, the reader takes a trip to the wonderland of project development reality, constraints, risk and again, best practices. Each chapter of the book brings its share of added value. This is not a book, this is truly a knowledge base.

In June 2003, spring 0.9 was released under the Apache 2.0 license. In March 2004, spring 1.0 was released. Interestingly, even before the 1.0 release, spring was widely adopted by developers. In August 2004, Rod Johnson, Juergen Hoeller, Keith Donald and Colin Sampaleanu co-founded interface21, a company focused on spring consulting, training and support.

Yann Caroff left the team in the early days. Rod Johnson left the spring team in 2012. Juergen Hoeller is still an active member of the spring development team.

Rapid Growth of Spring Framework

The spring framework evolved rapidly after the 1.0 release in 2004. Spring 2.0 was released in October 2006, and by that time spring downloads had crossed the 1 million mark. Spring 2.0 brought features such as extensible XML configuration namespaces (which simplified XML configuration), support for Java 5, additional IoC container extension points, support for dynamic languages such as groovy, aop enhancements and new bean scopes.

Interface21, the company managing the spring projects under Rod’s leadership, was renamed SpringSource in November 2007. At the same time, Spring 2.5 was released. Major new features in spring 2.5 included support for Java 6/Java EE 5, annotation based configuration, component auto-detection in the classpath and OSGi compliant bundles.

In 2007, SpringSource secured $10 million in series A funding from Benchmark Capital, and it raised additional capital in 2008 through series B funding from Accel Partners and Benchmark. SpringSource acquired a number of companies during this timeframe (Covalent, Hyperic, G2One etc.). In August 2009, SpringSource itself was acquired by VMware for $420 million! Within a few weeks SpringSource acquired Cloud Foundry, a cloud PaaS provider. In 2015, Cloud Foundry was moved to the not-for-profit Cloud Foundry Foundation.

In December 2009, spring 3.0 was released. Spring 3.0 had a number of major features such as a reorganized module system, support for the spring expression language, java based bean configuration (JavaConfig), support for embedded databases such as HSQL, H2 and Derby, model validation/REST support and support for Java EE 6.

A number of minor versions in the 3.x series were released in 2011 and 2012. In July 2012, Rod Johnson left the spring team. In April 2013, VMware and EMC created a joint venture called Pivotal, with GE investment. All the spring application projects were moved to Pivotal.

In December 2013, Pivotal announced the release of spring framework 4.0. Spring 4.0 was a major step forward and included features such as full support for Java 8, higher minimum versions of third party library dependencies (groovy 1.8+, ehcache 2.1+, hibernate 3.6+ etc.), Java EE 7 support, a groovy DSL for bean definitions, support for websockets and support for generic types as a qualifier for injecting beans.

A number of spring framework 4.x releases came out in the 2014 to 2017 period. The current spring framework version (4.3.7) was released in March 2017. Spring framework 4.3.8 is scheduled for release in April 2017, and it is expected to be the last release in the 4.x series.

The next major version of spring framework is spring 5.0. It is currently scheduled for release in the last quarter of 2017. However this may change as it has a dependency on the JDK 9 release.

History of Spring Boot

In October 2012, Mike Youngstrom created a feature request in the spring jira asking for support for containerless web application architectures in the spring framework. He talked about configuring web container services within a spring container bootstrapped from the main method! Here is an excerpt from the jira request,

I think that Spring’s web application architecture can be significantly simplified if it were to provided tools and a reference architecture that leveraged the Spring component and configuration model from top to bottom. Embedding and unifying the configuration of those common web container services within a Spring Container bootstrapped from a simple main() method.

This request led to the development of the spring boot project, starting sometime in early 2013. In April 2014, spring boot 1.0.0 was released. Since then a number of spring boot minor versions have come out,

  • Spring boot 1.1 (June 2014) – improved templating support, gemfire support, auto configuration for elasticsearch and apache solr.
  • Spring boot 1.2 (March 2015) – upgrade to servlet 3.1/tomcat 8/jetty 9, spring 4.1 upgrade, support for banner/jms/SpringBootApplication annotation.
  • Spring boot 1.3 (December 2015) – spring 4.2 upgrade, new spring-boot-devtools, auto configuration for caching technologies (ehcache, hazelcast, redis, guava and infinispan) and fully executable jar support.
  • Spring boot 1.4 (July 2016) – spring 4.3 upgrade, couchbase/neo4j support, analysis of startup failures and RestTemplateBuilder.
  • Spring boot 1.5 (February 2017) – support for kafka/ldap, third party library upgrades, deprecation of CRaSH support and actuator loggers endpoint to modify application log levels on the fly.

The simplicity of spring boot led to its quick, large scale adoption by java developers. Spring boot is arguably one of the fastest ways to develop REST based microservice web applications in java, and it is also very suitable for docker container deployments and quick prototyping.

Spring IO and Spring Boot

In June 2014, spring io 1.0.0 was released. Spring io represents a predefined set of dependencies between application libraries (including spring projects and third party libraries). This means that if you create a project using a specific spring io version, you no longer need to define the versions of the libraries you use! Note that this covers spring libraries and most of the popular third party libraries. Even the spring boot starter projects are part of spring io. For example, if you are using spring io 1.0.0, you don’t need to specify the spring boot version when adding dependencies on starter projects; it is automatically assumed to be spring boot 1.1.1.RELEASE.

Conceptually spring io consists of a foundation layer of modules and an execution layer of domain specific runtimes (DSRs). The foundation layer represents the curated list of core spring modules and third party dependencies, while spring boot is one of the execution layer DSRs provided by spring io. Hence there are now two main ways to build spring applications,

  • Use spring boot directly with or without using spring io.
  • Use spring io with required spring modules.

Note that usually whenever a new spring framework version is released, it will trigger a new spring boot release. This will in turn trigger a new spring io release.

In November 2015, spring io 2.0.0 was released. This provided an updated set of dependencies, including spring boot 1.3. In July 2016, the spring io team decided to switch to an alphabetical versioning scheme based on city names. In this scheme, a new name indicates minor and major upgrades to the dependency libraries, so depending on the individual components used, your application may require modifications. However, the service releases under a given name are always maintenance releases, and hence you can use them without breaking your code.

In September 2016, Athens, the first spring io platform release with the alphabetical city naming, was released. It contained spring boot 1.4 and other third party library upgrades. Since then a number of service releases for Athens have come out (SR1, SR2, SR3 and SR4).

In March 2017, the latest spring io platform (Brussels-SR1) was released. It uses the latest spring boot release (1.5.2.RELEASE). The next spring io platform, Cairo, is scheduled for release with spring boot 2.0 and spring framework 5.0.

Future of Spring

Spring 5.0 is scheduled for release in the last quarter of 2017 with JDK 9 support. The spring 5.0 release is a prerequisite for the spring boot 2.0 release, and spring io Cairo in turn requires spring boot 2.0. If everything goes well with the JDK 9 release, all of the above versions should be available before 2018.

JDK 9 => Spring 5.0 => Spring Boot 2.0 => Spring IO Cairo.

Spring Timeline Infographic

Check out the following infographic for a quick look at the spring history.

[Infographic: history of spring framework and spring boot]