
Hadoop cluster deployment in Eucalyptus cloud using Puppet

 

I would recommend reading this great article to get a basic understanding of Hadoop concepts:

http://bradhedlund.com/2011/09/10/understanding-hadoop-clusters-and-the-network/

Another great article on deploying Hadoop clusters:

http://hadoop.apache.org/docs/r1.0.4/single_node_setup.html

 

A short description of how application deployment can be automated on the Eucalyptus cloud using Puppet:

https://bennojoy.wordpress.com/2012/11/09/automating-application-deployment-using-puppet-in-eucalyptusaws-cloud/

 

 

This article lists the manifests that were used to automate the deployment of a two-node Hadoop cluster. One node acts as the master (NameNode, DataNode, JobTracker and TaskTracker) while the second node runs as a slave (DataNode, TaskTracker).

As the first step we create an instance in Eucalyptus, passing the user-data variable 'role' with the value hadoopmaster, which is later read by the Puppet master to determine which manifests should be sent down to the client.

euca-run-instances -k benkey emi-0AA43BEE -t m1.large -d "role=hadoopmaster"
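
The slave node can be launched the same way, only with a different role value; a minimal sketch, assuming the same keypair and image are reused for the slave:

euca-run-instances -k benkey emi-0AA43BEE -t m1.large -d "role=hadoopslave"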

A script in rc.local of the template fetches the user data, assigns hadoopmaster as a Facter fact in the running instance, and contacts the Puppet master.

TMP_FILE="/tmp/user-data-$$"

curl --retry 3 --retry-delay 10 -o $TMP_FILE http://10.101.1.118:8773/latest/user-data

if [ -s $TMP_FILE ]; then
    echo "Downloaded user data in $TMP_FILE"
    if [ "`head -c 2 $TMP_FILE`" = "#!" ]; then
        chmod a+x $TMP_FILE
        echo "User data is a script: executing it"
        sh $TMP_FILE
    fi
    ROLE=`cat $TMP_FILE | grep role | cut -d= -f2 | col -b`
    cat > /usr/lib/ruby/site_ruby/1.8/facter/bentest.rb <<EOF
Facter.add(:role) do
  setcode do
    role = '$ROLE'
  end
end
EOF
fi
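
As a quick sanity check on the booted instance, the new fact can be queried directly (this assumes Facter 1.x picks up custom facts placed in the Ruby site directory, as in the template above):

# should print the value passed in the user data, e.g. hadoopmaster
facter role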

 

Once the Puppet master is contacted by the Puppet agent in the running instance, the Puppet server checks the Facter value of role and sends back the appropriate configuration.

 

[root@e-20 modules]# cat /etc/puppet/modules/vmnodes/manifests/classifiers.pp
class vmnodes::classifiers {

  case $role {
    "dbserver" : {
      include "mysql"
    }
  }
  case $role {
    "appserver" : {
      include "httpd"
    }
  }
  case $role {
    "hadoopmaster" : {
      include "hadoopmaster"
    }
  }
  case $role {
    "hadoopslave" : {
      include "hadoopslave"
    }
  }
}
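
On the instance side, the catalog is pulled with a normal agent run against the master; a minimal sketch, assuming a Puppet 2.x/3.x client in the image and that the master resolves as 'puppet' (via the /etc/hosts entry added by rc.local, or DNS):

# one-off agent run with verbose output
puppet agent --test --server puppet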

Here’s the manifest that would be sent down to the hadoopmaster node.

 

[root@e-20 modules]# cat /etc/puppet/modules/hadoopmaster/manifests/init.pp 

class hadoopmaster {

  #install the JDK package
  package { "java-1.6.0-openjdk-devel":
    ensure => "present",
  }

  #install the hadoop rpm
  package { 'hadoop':
    ensure   => installed,
    provider => 'rpm',
    source   => 'http://apache.techartifact.com/mirror/hadoop/common/stable/hadoop-1.0.4-1.x86_64.rpm',
    require  => Package["java-1.6.0-openjdk-devel"],
  }

  #The env file for hadoop, where the appropriate JAVA_HOME is configured.
  file { "/etc/hadoop/hadoop-env.sh":
    source  => "puppet:///hadoopmaster/hadoop-env.sh",
    require => Package["hadoop"],
    mode    => "0755",
  }

  #The configured core-site file (should be tweaked according to your site's needs); the files below are Hadoop configuration files and their contents depend on the deployment.
  file { "/etc/hadoop/core-site.xml":
    source  => "puppet:///hadoopmaster/core-site.xml",
    require => Package["hadoop"],
    mode    => "0755",
  }

  file { "/etc/hadoop/hdfs-site.xml":
    source  => "puppet:///hadoopmaster/hdfs-site.xml",
    require => Package["hadoop"],
    mode    => "0755",
  }

  file { "/etc/hadoop/mapred-site.xml":
    source  => "puppet:///hadoopmaster/mapred-site.xml",
    require => Package["hadoop"],
    mode    => "0755",
  }

  #Contains the hostname of the node which acts as the master.
  file { "/etc/hadoop/masters":
    source  => "puppet:///hadoopmaster/masters",
    require => Package["hadoop"],
    mode    => "0755",
  }

  #Contains the hostnames of the slaves.
  file { "/etc/hadoop/slaves":
    source  => "puppet:///hadoopmaster/slaves",
    require => Package["hadoop"],
    mode    => "0755",
  }

 

  #Key pair for passwordless access to the slaves by the master node.
  file { "/root/.ssh/id_dsa":
    source => "puppet:///hadoopmaster/id_dsa",
    mode   => "0600",
  }

  file { "/root/.ssh/id_dsa.pub":
    source => "puppet:///hadoopmaster/id_dsa.pub",
    mode   => "0644",
  }

  file { "/root/.ssh/authorized_keys":
    source => "puppet:///hadoopmaster/authorized_keys",
    mode   => "0400",
  }

  #Make the hostname of the master server "master"; the IP address must be the private IP, as the Java processes wouldn't be able to bind to the public IP. On the slave nodes this points to the public IP of the master node (use elastic IPs here).
  host { "master":
    ip   => "$ipaddress",
    name => "master",
  }

  #Host entry for the slave; use an elastic IP here.
  host { "slave":
    ip   => "10.101.5.100",
    name => "slave",
  }

  #Formats the HDFS filesystem.
  exec { "hadoop namenode -format":
    creates => "/tmp/hadoop-root/dfs",
    path    => ["/bin", "/usr/bin", "/usr/sbin"],
    require => Package["hadoop"],
  }

  #The rpm doesn't set the execute permission on the startup scripts, so we do it using puppet.
  exec { "chmod 755 /usr/sbin/*.sh":
    path    => ["/bin", "/usr/sbin"],
    require => Package["hadoop"],
  }
}
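
The id_dsa, id_dsa.pub and authorized_keys files served from puppet:///hadoopmaster/ (and the authorized_keys served to the slave) have to be staged on the Puppet master beforehand; a minimal sketch, assuming a passphrase-less DSA key pair is acceptable and leaving aside where exactly the hadoopmaster/hadoopslave file server mounts are configured to read from:

# generate a passphrase-less DSA key pair for root-to-root ssh between the nodes
ssh-keygen -t dsa -N "" -f id_dsa
# the public key doubles as the authorized_keys content on both nodes
cp id_dsa.pub authorized_keys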

 

The manifest for the hadoopslave is as follows; the only difference is in the host entries for master and slave: master gets the elastic IP instead of its private IP.

 

[root@e-20 modules]# cat /etc/puppet/modules/hadoopslave/manifests/init.pp 

class hadoopslave {

  package { "java-1.6.0-openjdk-devel":
    ensure => "present",
  }

  package { 'hadoop':
    ensure   => installed,
    provider => 'rpm',
    source   => 'http://apache.techartifact.com/mirror/hadoop/common/stable/hadoop-1.0.4-1.x86_64.rpm',
    require  => Package["java-1.6.0-openjdk-devel"],
  }

  file { "/etc/hadoop/hadoop-env.sh":
    source  => "puppet:///hadoopslave/hadoop-env.sh",
    require => Package["hadoop"],
    mode    => "0755",
  }

  file { "/etc/hadoop/core-site.xml":
    source  => "puppet:///hadoopslave/core-site.xml",
    require => Package["hadoop"],
    mode    => "0755",
  }

  file { "/etc/hadoop/hdfs-site.xml":
    source  => "puppet:///hadoopslave/hdfs-site.xml",
    require => Package["hadoop"],
    mode    => "0755",
  }

  file { "/etc/hadoop/mapred-site.xml":
    source  => "puppet:///hadoopslave/mapred-site.xml",
    require => Package["hadoop"],
    mode    => "0755",
  }

  file { "/etc/hadoop/slaves":
    source  => "puppet:///hadoopslave/slaves",
    require => Package["hadoop"],
    mode    => "0755",
  }

  file { "/etc/hadoop/masters":
    source  => "puppet:///hadoopslave/masters",
    require => Package["hadoop"],
    mode    => "0755",
  }

  file { "/root/.ssh/authorized_keys":
    source => "puppet:///hadoopslave/authorized_keys",
    mode   => "0400",
  }

  host { "slave":
    ip   => "$ipaddress",
    name => "slave",
  }

  host { "master":
    ip   => "10.101.5.10",
    name => "master",
  }

}

 

Once both instances are created and the Puppet configurations have been applied, we can log in to the nodes, start the services and run some MapReduce jobs.

To start the services, log in to the master node and execute the following commands.

#starts the hdfs services on the master and slave.

-bash-3.2# start-dfs.sh 

starting namenode, logging to /var/log/hadoop/root/hadoop-root-namenode-master.out

master: starting datanode, logging to /var/log/hadoop/root/hadoop-root-datanode-master.out

slave: starting datanode, logging to /var/log/hadoop/root/hadoop-root-datanode-slave.out

master: starting secondarynamenode, logging to /var/log/hadoop/root/hadoop-root-secondarynamenode-master.out

-bash-3.2# 

 

#start the map reduce processes on the master; the master automatically starts the processes on the slave as well.

-bash-3.2# start-mapred.sh 

starting jobtracker, logging to /var/log/hadoop/root/hadoop-root-jobtracker-master.out

slave: starting tasktracker, logging to /var/log/hadoop/root/hadoop-root-tasktracker-slave.out

master: starting tasktracker, logging to /var/log/hadoop/root/hadoop-root-tasktracker-master.out

-bash-3.2# 

 

Check whether the Hadoop processes are running.

#master

-bash-3.2# jps

10444 NameNode

10890 TaskTracker

10548 DataNode

10782 JobTracker

10678 SecondaryNameNode

10997 Jps

 

#slave

-bash-3.2# jps

9210 Jps

9115 TaskTracker

9023 DataNode

 

 

Once we confirm that the processes are running, we can run a MapReduce job. Here we will create a file with multiple occurrences of a word and use the grep example job to count the number of occurrences of that word.

 

 

-bash-3.2# cat > inputfile

abc

benno

asdf

sdf

benno

-bash-3.2# hadoop fs -put inputfile inputfile

-bash-3.2# hadoop jar /usr/share/hadoop/hadoop-examples-1.0.4.jar grep inputfile outputfile 'benno'

12/11/15 02:00:25 INFO util.NativeCodeLoader: Loaded the native-hadoop library

12/11/15 02:00:25 WARN snappy.LoadSnappy: Snappy native library not loaded

12/11/15 02:00:25 INFO mapred.FileInputFormat: Total input paths to process : 1

12/11/15 02:00:25 INFO mapred.JobClient: Running job: job_201211150152_0001

12/11/15 02:00:26 INFO mapred.JobClient:  map 0% reduce 0%

12/11/15 02:00:41 INFO mapred.JobClient:  map 50% reduce 0%

12/11/15 02:00:44 INFO mapred.JobClient:  map 100% reduce 0%

12/11/15 02:00:53 INFO mapred.JobClient:  map 100% reduce 100%

12/11/15 02:00:58 INFO mapred.JobClient: Job complete: job_201211150152_0001

12/11/15 02:00:58 INFO mapred.JobClient: Counters: 30

12/11/15 02:00:58 INFO mapred.JobClient:   Job Counters 

12/11/15 02:00:58 INFO mapred.JobClient:     Launched reduce tasks=1

12/11/15 02:00:58 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=17305

12/11/15 02:00:58 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

12/11/15 02:00:58 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

12/11/15 02:00:58 INFO mapred.JobClient:     Launched map tasks=2

12/11/15 02:00:58 INFO mapred.JobClient:     Data-local map tasks=2

12/11/15 02:00:58 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=10413

12/11/15 02:00:58 INFO mapred.JobClient:   File Input Format Counters 

12/11/15 02:00:58 INFO mapred.JobClient:     Bytes Read=39

12/11/15 02:00:58 INFO mapred.JobClient:   File Output Format Counters 

12/11/15 02:00:58 INFO mapred.JobClient:     Bytes Written=108

12/11/15 02:00:58 INFO mapred.JobClient:   FileSystemCounters

12/11/15 02:00:58 INFO mapred.JobClient:     FILE_BYTES_READ=38

12/11/15 02:00:58 INFO mapred.JobClient:     HDFS_BYTES_READ=221

12/11/15 02:00:58 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=64896

12/11/15 02:00:58 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=108

12/11/15 02:00:58 INFO mapred.JobClient:   Map-Reduce Framework

12/11/15 02:00:58 INFO mapred.JobClient:     Map output materialized bytes=44

12/11/15 02:00:58 INFO mapred.JobClient:     Map input records=5

12/11/15 02:00:58 INFO mapred.JobClient:     Reduce shuffle bytes=22

12/11/15 02:00:58 INFO mapred.JobClient:     Spilled Records=4

12/11/15 02:00:58 INFO mapred.JobClient:     Map output bytes=28

12/11/15 02:00:58 INFO mapred.JobClient:     Total committed heap usage (bytes)=336338944

12/11/15 02:00:58 INFO mapred.JobClient:     CPU time spent (ms)=1270

12/11/15 02:00:58 INFO mapred.JobClient:     Map input bytes=25

12/11/15 02:00:58 INFO mapred.JobClient:     SPLIT_RAW_BYTES=182

12/11/15 02:00:58 INFO mapred.JobClient:     Combine input records=2

12/11/15 02:00:58 INFO mapred.JobClient:     Reduce input records=2

12/11/15 02:00:58 INFO mapred.JobClient:     Reduce input groups=1

12/11/15 02:00:58 INFO mapred.JobClient:     Combine output records=2

12/11/15 02:00:58 INFO mapred.JobClient:     Physical memory (bytes) snapshot=418250752

12/11/15 02:00:58 INFO mapred.JobClient:     Reduce output records=1

12/11/15 02:00:58 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2176126976

12/11/15 02:00:58 INFO mapred.JobClient:     Map output records=2

12/11/15 02:00:59 INFO mapred.FileInputFormat: Total input paths to process : 1

12/11/15 02:00:59 INFO mapred.JobClient: Running job: job_201211150152_0002

12/11/15 02:01:00 INFO mapred.JobClient:  map 0% reduce 0%

12/11/15 02:01:14 INFO mapred.JobClient:  map 100% reduce 0%

12/11/15 02:01:20 INFO mapred.JobClient:  map 100% reduce 100%

12/11/15 02:01:25 INFO mapred.JobClient: Job complete: job_201211150152_0002

12/11/15 02:01:25 INFO mapred.JobClient: Counters: 30

12/11/15 02:01:25 INFO mapred.JobClient:   Job Counters 

12/11/15 02:01:25 INFO mapred.JobClient:     Launched reduce tasks=1

12/11/15 02:01:25 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=13221

12/11/15 02:01:25 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

12/11/15 02:01:25 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

12/11/15 02:01:25 INFO mapred.JobClient:     Launched map tasks=1

12/11/15 02:01:25 INFO mapred.JobClient:     Data-local map tasks=1

12/11/15 02:01:25 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=4230

12/11/15 02:01:25 INFO mapred.JobClient:   File Input Format Counters 

12/11/15 02:01:25 INFO mapred.JobClient:     Bytes Read=108

12/11/15 02:01:25 INFO mapred.JobClient:   File Output Format Counters 

12/11/15 02:01:25 INFO mapred.JobClient:     Bytes Written=8

12/11/15 02:01:25 INFO mapred.JobClient:   FileSystemCounters

12/11/15 02:01:25 INFO mapred.JobClient:     FILE_BYTES_READ=22

12/11/15 02:01:25 INFO mapred.JobClient:     HDFS_BYTES_READ=221

12/11/15 02:01:25 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=42503

12/11/15 02:01:25 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=8

12/11/15 02:01:25 INFO mapred.JobClient:   Map-Reduce Framework

12/11/15 02:01:25 INFO mapred.JobClient:     Map output materialized bytes=22

12/11/15 02:01:25 INFO mapred.JobClient:     Map input records=1

12/11/15 02:01:25 INFO mapred.JobClient:     Reduce shuffle bytes=0

12/11/15 02:01:25 INFO mapred.JobClient:     Spilled Records=2

12/11/15 02:01:25 INFO mapred.JobClient:     Map output bytes=14

12/11/15 02:01:25 INFO mapred.JobClient:     Total committed heap usage (bytes)=176033792

12/11/15 02:01:25 INFO mapred.JobClient:     CPU time spent (ms)=720

12/11/15 02:01:25 INFO mapred.JobClient:     Map input bytes=22

12/11/15 02:01:25 INFO mapred.JobClient:     SPLIT_RAW_BYTES=113

12/11/15 02:01:25 INFO mapred.JobClient:     Combine input records=0

12/11/15 02:01:25 INFO mapred.JobClient:     Reduce input records=1

12/11/15 02:01:25 INFO mapred.JobClient:     Reduce input groups=1

12/11/15 02:01:25 INFO mapred.JobClient:     Combine output records=0

12/11/15 02:01:25 INFO mapred.JobClient:     Physical memory (bytes) snapshot=240148480

12/11/15 02:01:25 INFO mapred.JobClient:     Reduce output records=1

12/11/15 02:01:25 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1390665728

12/11/15 02:01:25 INFO mapred.JobClient:     Map output records=1

-bash-3.2# hadoop fs -get outputfile outputfile

-bash-3.2# cat outputfile/part-00000 

2 benno

-bash-3.2#

 

 

 

The whole process of building the Hadoop cluster took around 10 minutes. Agility, manifested. A video of the same is also available here.

Automating application deployment using Puppet in the Eucalyptus/AWS cloud

 

 

 

In this example a user can pass a user-data parameter which decides the role of the EMI/AMI being launched; based on that role, applications are deployed by the Puppet master.
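
For illustration, the launch could pass both values in a small user-data file so that role and puppet each sit on their own line, which is what the rc.local script below greps for (the file name, role value and Puppet master IP are only examples, and -f is assumed to be the euca2ools user-data-file flag):

cat > userdata.txt <<EOF
role=appserver
puppet=10.101.1.118
EOF
euca-run-instances -k benkey emi-0AA43BEE -t m1.large -f userdata.txt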

 

Prerequisites:

A working Puppet master server

An AMI/EMI with the Puppet client installed

 

 

Steps:

 

On the AMI/EMI template, update rc.local as follows:

TMP_FILE="/tmp/user-data-$$"

curl --retry 3 --retry-delay 10 -o $TMP_FILE http://10.101.1.118:8773/latest/user-data

if [ -s $TMP_FILE ]; then
    echo "Downloaded user data in $TMP_FILE"
    if [ "`head -c 2 $TMP_FILE`" = "#!" ]; then
        chmod a+x $TMP_FILE
        echo "User data is a script: executing it"
        sh $TMP_FILE
    fi
    ROLE=`cat $TMP_FILE | grep role | cut -d= -f2 | col -b`
    cat > /usr/lib/ruby/site_ruby/1.8/facter/bentest.rb <<EOF
Facter.add(:role) do
  setcode do
    role = '$ROLE'
  end
end
EOF
    PUPPET=`cat $TMP_FILE | grep puppet | cut -d= -f2`
    echo "$PUPPET puppet" >> /etc/hosts
fi

 

 

What this does is create a custom fact in the VM called 'role' and assign a value to it; this fact is read by the manifests on the Puppet master to decide what recipes should be pushed to the node.

It also makes an entry for the Puppet master on the client node. (This is optional, as you can define a hostname in the EMI template and make sure DNS points it to the right Puppet server.)

 

On the puppet master:

The first thing the Puppet master needs to do is identify the node and pass the appropriate configuration; for this, as mentioned above, the Puppet master looks at the Facter fact 'role' and decides which configuration to send.

 

A sample of how to accomplish that:

 

site.pp

=====

 

/etc/puppet/site.pp

#Has all node configuration
import "nodes.pp"
import "modules.pp"

# The filebucket option allows for file backups to the server
filebucket { main: server => 'puppet' }

# Set global defaults - backing up all files to the main filebucket and adding a global path
File { backup => main }
Exec { path => "/usr/bin:/usr/sbin:/bin:/sbin" }

 

/etc/puppet/manifests/nodes.pp

======================

 

node default {
  include vmnodes::classifiers
}

 

Module for vmnodes classifiers

=========================

 

/etc/puppet/modules/vmnodes/manifests/init.pp

import "classifiers.pp"

 

==== Here it decides, based on the Facter fact, what configuration needs to be pushed ====

 

/etc/puppet/modules/vmnodes/manifests/classifiers.pp

class vmnodes::classifiers {

  case $role {
    "dbserver" : {
      include mysql
    }
  }
  case $role {
    "appserver" : {
      include httpd
    }
  }
}

 

/etc/puppet/manifests/modules.pp

import "vmnodes"

 

Jclouds with eucalyptus

I was working on Eucalyptus when I had to test an integration of Eucalyptus with jclouds; here are brief steps to help users get started with integrating Eucalyptus with jclouds.

 

Installing Jclouds

==============

taken from: http://blog.phymata.com/2012/08/15/getting-started-with-jclouds/

$ mkdir jclouds; cd jclouds
$ curl -o lein.sh https://raw.github.com/technomancy/leiningen/stable/bin/lein
$ chmod u+x lein.sh
$ echo '(defproject deps "1" :dependencies [[org.jclouds/jclouds-all "1.5.1"] [org.jclouds.driver/jclouds-sshj "1.5.1"]])' > project.clj
$ ./lein.sh deps
 
Sample code that lists all running instances in eucalyptus cloud + creates a security group + adds an ingress rule.
==========================================
 

import java.util.Set;
import java.lang.Thread.UncaughtExceptionHandler;
import java.util.Properties;

import org.jclouds.ContextBuilder;
import org.jclouds.compute.ComputeService;
import org.jclouds.compute.ComputeServiceContext;
import org.jclouds.compute.domain.ComputeMetadata;
import org.jclouds.logging.slf4j.config.SLF4JLoggingModule;
import org.jclouds.sshj.config.SshjSshClientModule;
import com.google.common.collect.ImmutableSet;
import com.google.inject.Module;
import org.jclouds.ec2.services.SecurityGroupClient;
import org.jclouds.rest.RestContext;
import org.jclouds.ec2.EC2AsyncClient;
import org.jclouds.ec2.EC2Client;
import org.jclouds.ec2.domain.IpProtocol;

public class JCloudsTest {

    public static void main(String[] args) {
        String provider = "eucalyptus";
        String identity = "HEIV7IISYV0ZR3ZIQDCOO";
        String credential = "OtKODhCwjQ90dRQyuHFNHbRjToRmoRuMGC6gi44J";
        JCloudsTest.init();

        // List all running instances in the Eucalyptus cloud.
        ComputeService compute = initComputeService(provider, identity, credential);
        System.out.println("Calling listNodes...");
        Set<? extends ComputeMetadata> nodes = compute.listNodes();

        System.out.println("Total Number of Nodes = " + nodes.size());
        for (ComputeMetadata node : nodes) {
            System.out.println("\t" + node);
        }

        // Create a security group and add an ingress rule.
        JCloudsTest.initComputeSecurity(provider, identity, credential);

        System.exit(0);
    }

    private static void init() {
        Thread.setDefaultUncaughtExceptionHandler(new UncaughtExceptionHandler() {
            public void uncaughtException(Thread t, Throwable e) {
                e.printStackTrace();
                System.exit(1);
            }
        });
    }

    private static ComputeService initComputeService(String provider, String identity, String credential) {
        Properties properties = new Properties();
        properties.setProperty("eucalyptus.endpoint", "http://10.101.1.118:8773/services/Eucalyptus");
        Iterable<Module> modules = ImmutableSet.<Module> of(
            new SshjSshClientModule(),
            new SLF4JLoggingModule());

        ContextBuilder builder = ContextBuilder.newBuilder(provider)
            .credentials(identity, credential)
            .modules(modules)
            .overrides(properties);
        System.out.printf(">> initializing %s%n", builder.getApiMetadata());
        return builder.buildView(ComputeServiceContext.class).getComputeService();
    }

    private static void initComputeSecurity(String provider, String identity, String credential) {
        Properties properties = new Properties();
        properties.setProperty("eucalyptus.endpoint", "http://10.101.1.118:8773/services/Eucalyptus");
        Iterable<Module> modules = ImmutableSet.<Module> of(
            new SshjSshClientModule(),
            new SLF4JLoggingModule());

        ContextBuilder builder = ContextBuilder.newBuilder(provider)
            .credentials(identity, credential)
            .modules(modules)
            .overrides(properties);
        builder.getApiMetadata();
        ComputeServiceContext context = builder.buildView(ComputeServiceContext.class);
        RestContext<EC2Client, EC2AsyncClient> context1 = context.getProviderSpecificContext();
        EC2Client client = context1.getApi();
        SecurityGroupClient secClient = client.getSecurityGroupServices();
        try {
            secClient.createSecurityGroupInRegion("eucalyptus", "bentest", "benno testing");
            secClient.authorizeSecurityGroupIngressInRegion("eucalyptus", "bentest", IpProtocol.TCP, 5, 65, "0.0.0.0/0");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

}

Compile

==========

javac -cp ".:lib/*" JCloudsTest.java

 

Run the code

=============

 

java -cp ".:lib/*" JCloudsTest