Random Alphanumeric Generator in Java

The following Java example program generates a cryptographically secure random alphanumeric string. If cryptographic security is not a requirement, the SecureRandom class can be replaced with the Random class for a much faster implementation.

The following function can be used to generate random strings of a specified length. It is also possible to restrict the random string to a subset of characters by modifying the CHARACTER_SET variable.

import java.security.SecureRandom;

// Example - Random alphanumeric generator in Java
public class RandomAlphaNumericGenerator {
    private static final SecureRandom random = new SecureRandom();
    private static final String CHARACTER_SET = "0123456789abcdefghijklmnopqrstuvwxyz";

    public static void main(String[] args) {
        System.out.println("Random Alphanumeric String: " + getRandomString(32));
    }

    // Create a random alphanumeric string of the given length
    private static String getRandomString(int len) {
        StringBuilder buff = new StringBuilder(len);
        for (int i = 0; i < len; i++) {
            int offset = random.nextInt(CHARACTER_SET.length());
            buff.append(CHARACTER_SET.charAt(offset));
        }
        return buff.toString();
    }
}
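
If cryptographic strength is not required, a faster variant can use a non-secure generator as mentioned above. The following is a minimal sketch using ThreadLocalRandom; the class name FastRandomAlphaNumericGenerator is just an illustrative name, and the rest mirrors the example above.

import java.util.concurrent.ThreadLocalRandom;

// Sketch - non-cryptographic variant, faster but not suitable for secrets such as tokens or passwords
public class FastRandomAlphaNumericGenerator {
    private static final String CHARACTER_SET = "0123456789abcdefghijklmnopqrstuvwxyz";

    public static void main(String[] args) {
        System.out.println("Random Alphanumeric String: " + getRandomString(32));
    }

    // Same logic as the SecureRandom version, backed by ThreadLocalRandom instead
    private static String getRandomString(int len) {
        StringBuilder buff = new StringBuilder(len);
        for (int i = 0; i < len; i++) {
            buff.append(CHARACTER_SET.charAt(ThreadLocalRandom.current().nextInt(CHARACTER_SET.length())));
        }
        return buff.toString();
    }
}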

How to Pick a Random Character in Java

The following Java program can be used to generate random characters. There are three functions in the program.

  • getRandomCharacter() returns a random character from the ASCII printable character set.
  • getRandomAlphabet() returns a random lowercase English letter (a – z).
  • getRandomAlphaNum() returns a random alphanumeric character (0 – 9 and a – z).

import java.util.Random;

// Example - Java class to generate random characters
public class RandomCharDemo {

    public static final String ALPHANUMERIC_CHARACTERS = "0123456789abcdefghijklmnopqrstuvwxyz";
    private static final Random random = new Random();

    public static void main(String[] args) {
        System.out.println("Random character: " + getRandomCharacter());
        System.out.println("Random Alphabet: " + getRandomAlphabet());
        System.out.println("Random Alphanumeric: " + getRandomAlphaNum());
    }

    // Create a random alphanumeric character (digits and lowercase letters only)
    private static String getRandomAlphaNum() {
        int offset = random.nextInt(ALPHANUMERIC_CHARACTERS.length());
        return String.valueOf(ALPHANUMERIC_CHARACTERS.charAt(offset));
    }

    // Create a random lowercase letter (a - z)
    private static String getRandomAlphabet() {
        return String.valueOf((char) (random.nextInt(26) + 'a'));
    }

    // Create a random ASCII printable character (codes 32 - 126)
    private static String getRandomCharacter() {
        return String.valueOf((char) (random.nextInt(95) + 32));
    }
}

There are a number of uses for these functions. They can be used, for example, for generating random passwords or random words in a program.
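
For example, a simple password generator can be built on the same idea by drawing from a larger character set. The following is a minimal sketch; the class name, the character set and the length of 16 are illustrative choices, not requirements.

import java.security.SecureRandom;

// Sketch - simple password generator based on the same approach
public class RandomPasswordDemo {
    private static final String PASSWORD_CHARACTERS =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%";
    private static final SecureRandom random = new SecureRandom();

    public static void main(String[] args) {
        System.out.println("Random password: " + getRandomPassword(16));
    }

    // Picks len characters uniformly at random from PASSWORD_CHARACTERS
    private static String getRandomPassword(int len) {
        StringBuilder buff = new StringBuilder(len);
        for (int i = 0; i < len; i++) {
            buff.append(PASSWORD_CHARACTERS.charAt(random.nextInt(PASSWORD_CHARACTERS.length())));
        }
        return buff.toString();
    }
}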

How to Install Hadoop on Mac OS X El Capitan

This tutorial contains step-by-step instructions for installing hadoop 2.x on Mac OS X El Capitan. These instructions should also work on other Mac OS X versions such as Yosemite and Sierra. The tutorial runs hadoop in pseudo-distributed mode, which allows a single machine to run the different components of the system in separate Java processes. We will also configure YARN as the resource manager for running jobs on hadoop.

Hadoop Component Versions

  • Java 7 or higher. Java 8 is recommended.
  • Hadoop 2.7.3 or higher.

Hadoop Installation on Mac OS X Sierra & El Capitan

Step 1: Install Java

Hadoop 2.7.3 requires Java 7 or higher. Run the following command in a terminal to verify the Java version installed on the system.

java -version
Java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

If Java is not installed, you can get it from here.

Step 2: Configure SSH

When hadoop is installed in distributed mode, it uses passwordless SSH for master to slave communication. To enable the SSH daemon on the Mac, go to System Preferences => Sharing and turn on Remote Login. Then execute the following commands in the terminal to enable passwordless SSH login,

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Step 3: Install Hadoop

Download the hadoop 2.7.3 binary distribution from this link (about 200MB). Extract the contents of the archive to a folder of your choice.

Step 4: Configure Hadoop

First we need to configure the location of our Java installation in etc/hadoop/hadoop-env.sh. To find the location of the Java installation, run the following command in the terminal,

/usr/libexec/java_home
/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home

Copy the output of the command and use it to configure the JAVA_HOME variable in etc/hadoop/hadoop-env.sh.

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home

Next, modify the following hadoop configuration files to properly set up hadoop and yarn. These files are located in etc/hadoop.

etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <property>
        <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
        <value>98.5</value>
    </property>
</configuration>

Note the use of the disk utilization threshold above. This tells yarn to continue operations as long as disk utilization is below 98.5%. This was required on my system since my disk utilization was 95% and the default value for this setting is 90%. If disk utilization goes above the configured threshold, yarn will report the node as unhealthy with the error "local-dirs are bad".

Step 5: Initialize Hadoop Cluster

From a terminal window, switch to the hadoop home folder (the folder which contains subfolders such as bin and etc). Run the following command to initialize the metadata for the hadoop cluster. This formats the hdfs file system and configures it on the local system. By default, files are created in the /tmp/hadoop-<username> folder.

bin/hdfs namenode -format

It is possible to modify the default storage location of the name node metadata by adding the following property to the hdfs-site.xml file. Similarly, the hdfs data block storage location can be changed using the dfs.data.dir property.

<property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/dfs/name</value>
    <final>true</final>
</property>

The following commands should be executed from the hadoop home folder.

Step 6: Start Hadoop Cluster

Run the following command from the terminal (after switching to the hadoop home folder) to start the hadoop cluster. This starts the name node and data node on the local system.

sbin/start-dfs.sh

To verify that the namenode and datanode daemons are running, execute the following command on the terminal. This displays running Java processes on the system.

jps
19203 DataNode
29219 Jps
19126 NameNode
19303 SecondaryNameNode

Step 7: Configure HDFS Home Directories

We will now configure the hdfs home directory. The home directory is of the form /user/<username>. My user id on the mac system is jj; replace it with your user name. Run the following commands in the terminal,

bin/hdfs dfs -mkdir /user
bin/hdfs dfs -mkdir /user/jj

Step 8: Run YARN Manager

Start the YARN resource manager and node manager instances by running the following command in the terminal,

sbin/start-yarn.sh

Run the jps command again to verify all the running processes,

jps
19203 DataNode
29283 Jps
19413 ResourceManager
19126 NameNode
19303 SecondaryNameNode
19497 NodeManager

Step 9: Verify Hadoop Installation

Access the URL http://localhost:50070/dfshealth.html to view the hadoop name node web console. You can also browse the hdfs file system using the menu Utilities => Browse the file system.

[Screenshot: hadoop name node browser console]

Access the URL http://localhost:8088/cluster to view the hadoop cluster details through YARN resource manager.

[Screenshot: hadoop YARN browser console]

Step 10: Run Sample MapReduce Job

The hadoop installation contains a number of sample mapreduce jobs. We will run one of them to verify that our hadoop installation is working fine.

We will first copy a file from the local system to the hdfs home folder. We will use core-site.xml in etc/hadoop as our input,

bin/hdfs dfs -copyFromLocal etc/hadoop/core-site.xml .

Verify that the file is in HDFS folder by navigating to the folder from the name node browser console.

Let us run a mapreduce program on this hdfs file to find the number of occurrences of the word "configuration" in the file. The grep example available in the hadoop samples jar does exactly this.

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep ./core-site.xml output 'configuration'

This runs the mapreduce job on the hdfs file uploaded earlier and then writes the results to the output folder inside the hdfs home folder. The result file is named part-r-00000. You can download it from the name node browser console or run the following command to copy it to the local folder.

bin/hdfs dfs -get output/* .

Print the contents of the file. This contains the number of occurrences of the word "configuration" in core-site.xml.

cat part*
3    configuration

Finally, delete the uploaded file and the output folder from the hdfs system,

bin/hdfs dfs -rm -r output
bin/hdfs dfs -rm core-site.xml

Step 11: Stop Hadoop/YARN Cluster

Run the following commands to stop the hadoop/YARN daemons. This stops the name node, data node, node manager and resource manager.

sbin/stop-yarn.sh
sbin/stop-dfs.sh

How to Read HDFS File in Java

The Hadoop distributed file system (HDFS) can be accessed using the native Java API provided by the hadoop Java library. The following example uses the FileSystem API to read an existing file in an hdfs folder. Before running the Java program, ensure that the following values are changed as per your hadoop installation.

  • Modify the HDFS_ROOT_URL to point to the hadoop IPC endpoint. This can be copied from the etc/hadoop/core-site.xml file.
  • Modify the hdfs file path used in the program. The following program prints the file input.txt located in the /user/jj hdfs folder. The default hdfs home folder is named /user/<username>. Ensure that a file is already uploaded to the hdfs folder. To copy input.txt from your hadoop folder to hdfs, you can use the command "bin/hdfs dfs -copyFromLocal input.txt .".

Prerequisites

  • Java 1.8+
  • Gradle 3.x+
  • Hadoop 2.x

How to Read an HDFS File Using Gradle Java Project

Step 1: Create a simple gradle java project using the following command. This assumes that gradle is already installed on your system.

gradle init --type java-application

Step 2: Replace the file build.gradle with the following,

apply plugin: 'java-library'
apply plugin: 'application'

mainClassName = "HDFSDemo"

jar {
    manifest {
        attributes 'Main-Class': "$mainClassName"
    }
}

repositories {
    jcenter()
}

dependencies {
    compile 'org.apache.hadoop:hadoop-client:2.7.3'
}

Note the dependency on hadoop 2.7.3. Update this value if you are working with a different hadoop server version.

Step 3: Add the Java class HDFSDemo.java to the src/main/java folder. Delete App.java and AppTest.java from the project folder.

import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Sample Java program to read files from hadoop hdfs filesystem
public class HDFSDemo {

	// This is copied from the entry in core-site.xml for the property fs.defaultFS. 
	// Replace with your Hadoop deployment details.
	public static final String HDFS_ROOT_URL="hdfs://localhost:9000";
	private Configuration conf;

	public static void main(String[] args) throws Exception {
		HDFSDemo demo = new HDFSDemo();
		
		// Reads a file from the user's home directory.
		// Replace jj with the name of your folder
		// Assumes that input.txt is already in HDFS folder
		String uri = HDFS_ROOT_URL+"/user/jj/input.txt";
		demo.printHDFSFileContents(uri);
	}
	
	public HDFSDemo() {
		conf = new Configuration();
	}
	
	// Example - Print hdfs file contents to console using Java
	public void printHDFSFileContents(String uri) throws Exception {
		FileSystem fs = FileSystem.get(URI.create(uri), conf);
		InputStream in = null;
		try {
			in = fs.open(new Path(uri));
			IOUtils.copyBytes(in, System.out, 4096, false);
		} finally {
			IOUtils.closeStream(in);
		}
	}

}

Step 4: Build and run the application using the gradle wrapper command below. The contents of the hadoop hdfs file will be printed on the console.

./gradlew run
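
As a side note, the same FileSystem API can also be used to upload input.txt from Java instead of the bin/hdfs dfs -copyFromLocal command mentioned earlier. The following is a minimal sketch; the class name HDFSUploadDemo, the local file name and the /user/jj home folder are assumptions, so adjust them to match your setup.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch - copy a local file into the hdfs home folder using the FileSystem API
public class HDFSUploadDemo {

    public static void main(String[] args) throws Exception {
        String hdfsRootUrl = "hdfs://localhost:9000"; // same value as fs.defaultFS in core-site.xml
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(hdfsRootUrl), conf);

        // Assumed paths - replace with your local file and hdfs home folder
        Path localFile = new Path("input.txt");
        Path hdfsTarget = new Path(hdfsRootUrl + "/user/jj/input.txt");
        fs.copyFromLocalFile(localFile, hdfsTarget);
        System.out.println("Uploaded " + localFile + " to " + hdfsTarget);
    }
}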

Hosting Static HTML Pages Using AWS Lambda and API Gateway

AWS Lambda and API Gateway are commonly used to create microservice JSON endpoints. In such applications, the static html content is usually stored in an S3 bucket. However, if you are building a quick AWS lambda microservice prototype, it can be simpler to render static HTML directly from a lambda function. This has a number of advantages.

  • We just need one Java project containing all lambda functions and the static content.
  • We can host the microservice endpoints and static html on the same domain created by API gateway without any further configuration.

How to Host HTML Pages Using AWS Lambda and API Gateway

The following tutorial contains step-by-step instructions for hosting static html content using AWS lambda and API gateway. AWS lambda supports multiple language runtimes. The following sample code is written in Java and uses the AWS Java API.

The following lambda function reads an html file from the classpath and then prints it to the response object. Note that if you are building your Java project using Maven (which I highly recommend), this html file should be placed in the src/main/resources folder.

package com.quickprogrammingtips.lambda;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import com.amazonaws.util.IOUtils;

// Render static HTML from AWS Lambda function
public class ShowHtmlLambdaHandler implements RequestStreamHandler {

    @Override
    public void handleRequest(InputStream is, OutputStream os, Context context) throws IOException {
        context.getLogger().log("Displaying html content");
        try {
            ClassLoader loader = ShowHtmlLambdaHandler.class.getClassLoader();
            try (InputStream resourceStream = loader.getResourceAsStream("hello.html")) {
                os.write(IOUtils.toByteArray(resourceStream));
            }
        } catch (Exception ex) {
            os.write("Error in generating output.".getBytes());
        }
    }
}
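
When you create the function in the AWS console (as shown in the screenshots below), the handler field should point to this class. With the package used above, the handler name would typically be,

com.quickprogrammingtips.lambda.ShowHtmlLambdaHandler::handleRequest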

The following maven pom.xml contains the minimum dependencies for the above implementation. Use this pom file if you want to minimize the size of the fat jar uploaded to AWS lambda. Note the use of the maven shade plugin to embed dependent jars in the final fat jar. If you don't use the shade plugin, you will get a ClassNotFoundException when you run the lambda function in AWS.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>com.quickprogrammingtips.lambda</groupId>
	<artifactId>htmllambda</artifactId>
	<version>0.0.1-SNAPSHOT</version>

	<name>htmllambda</name>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<maven.compiler.source>1.8</maven.compiler.source>
		<maven.compiler.target>1.8</maven.compiler.target>
	</properties>

	<dependencies>
		<dependency>
			<groupId>com.amazonaws</groupId>
			<artifactId>aws-java-sdk-s3</artifactId>
			<version>1.11.98</version>
		</dependency>

		<dependency>
			<groupId>com.amazonaws</groupId>
			<artifactId>aws-lambda-java-core</artifactId>
			<version>1.1.0</version>
		</dependency>
		<dependency>
			<groupId>com.amazonaws</groupId>
			<artifactId>aws-lambda-java-events</artifactId>
			<version>1.3.0</version>
		</dependency>
	</dependencies>
	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-shade-plugin</artifactId>
				<version>2.3</version>
				<configuration>
					<createDependencyReducedPom>false</createDependencyReducedPom>
				</configuration>
				<executions>
					<execution>
						<phase>package</phase>
						<goals>
							<goal>shade</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</project>

Following is the content of the hello.html file,

<html>
	<head>
		<title>Hello World!</title>
	</head>
	<body>
		<h1>Hello World!</h1>
	</body>
</html>

Build the project using the maven package command (ensure that maven is installed and available on the system path on your machine),

mvn package

Upload the htmllambda-0.0.1-SNAPSHOT.jar created in the target folder to AWS lambda as shown below. This assumes that a basic lambda role is already created using the AWS IAM console. Ensure that this role (lambda_basic_execution) has at least the predefined policy AWSLambdaBasicExecutionRole attached to it. Once the lambda function is created, click on the Test button to verify the html output.

[Screenshot: create lambda - step 1]

[Screenshot: create lambda - step 2]

We can now configure a simple API gateway endpoint to use the above lambda function to output static html.

From API gateway, click on Create API and name the new API htmldemo. From the Actions menu, click on Create Method and select GET. Select the region where the lambda is hosted and select the lambda function as shown below. Click Save.

[Screenshot: link API gateway to lambda]

From the Actions menu, click on Deploy API. Whenever any change is made to the API configuration, it needs to be deployed before it is available at the URL endpoint. This allows us to deploy different configurations to different stages (production, staging, qa etc.). When deploying for the first time, you need to create a stage; name it beta. This will enable the API on a public URL as shown below,

[Screenshot: API gateway deploy]

Click on the URL to open it in a browser window.

[Screenshot: hello world rendered as raw JSON text]

The browser renders our hello world html as raw text! This is due to the default JSON content type set by API gateway. Also note that our content has double quotes around it. Let us now modify the API gateway configuration to remove the double quotes and set the content type to html.

Click on GET under resources for the htmldemo API. Click on Method Response in the right window. Remove the application/json entry and then add a Content-Type header as shown below.

[Screenshot: API gateway method response]

Click on GET under resources for the htmldemo API. Click on Integration Response in the right window. Under Header Mappings, configure the mapping value for Content-Type as 'text/html'. Please note that the single quotes must be preserved in the text. Under Body Mapping Templates, remove application/json and then add text/html with the following template content,

$input.path('$')

[Screenshot: API gateway integration response]

Now deploy the changed configuration. From the Actions menu, click on Deploy API and deploy it to the beta stage created earlier. Wait for a few seconds and then click on the API URL. If everything goes well, you should see the Hello World! output in the browser (beware of the browser cache!).

[Screenshot: hello world html output]