Repository Management with Nexus 3 for your Mavenized project, including release and snapshot distribution

As the Nexus documentation says:

Stop developing in the Dark Ages, read this book, and start using a repository manager. Trust us, once you start using a Nexus Repository Manager, you’ll wonder how you ever functioned without it.


A. Download the archive from

B. Unzip it into a folder and run it as follows

cd ~\nexus\nexus-3.3.1-01\bin
nexus.exe /run 

If the log shows the following line, the server is up:
 Started Sonatype Nexus OSS 3.3.1-01

C. Server starts by default on http://localhost:8081

username: admin  
password: admin123 
Use the above credentials to log in as the default administrator

D. Add the following configuration to USER_HOME\.m2\settings.xml


		  <!--This sends everything else to /public -->
		  <!--Enable snapshots for the built in central repo to direct -->
		  <!--all requests to nexus via the mirror -->
		<!--make the profile active all the time -->
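Only the comments above survived the formatting; a sketch of the full settings.xml they came from, following the standard Nexus mirror-plus-profile pattern (the server id and credentials are assumptions and must match your setup):

```xml
<settings>
  <servers>
    <!-- Credentials used by mvn deploy; id must match the distributionManagement repository id -->
    <server>
      <id>nexus</id>
      <username>admin</username>
      <password>admin123</password>
    </server>
  </servers>
  <mirrors>
    <!--This sends everything else to /public -->
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://localhost:8081/repository/maven-public/</url>
    </mirror>
  </mirrors>
  <profiles>
    <profile>
      <id>nexus</id>
      <!--Enable snapshots for the built in central repo to direct -->
      <!--all requests to nexus via the mirror -->
      <repositories>
        <repository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <!--make the profile active all the time -->
    <activeProfile>nexus</activeProfile>
  </activeProfiles>
</settings>
```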

E. Release and snapshot artifacts should be configured in the project's POM under distributionManagement
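A sketch of a typical distributionManagement section pointing at the default Nexus 3 hosted repositories (the id is an assumption and must match a <server> entry in settings.xml):

```xml
<distributionManagement>
  <repository>
    <id>nexus</id>
    <name>Releases</name>
    <url>http://localhost:8081/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>nexus</id>
    <name>Snapshots</name>
    <url>http://localhost:8081/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>
```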


F. The clean and deploy goals in your Java project will build and upload the artifacts to the repository, using the server credentials entry from settings.xml

mvn clean deploy -DskipTests


G. Add a proxy repository

You can add a new proxy repository to your Nexus instance using the following steps

  1. Create a repository from the repositories admin page
  2. Select the maven2 (proxy) recipe, since JBoss is a Maven-style repository
  3. Provide a name like “jboss-nexus-repository”
  4. Add this repository to the group your Maven build resolves from (e.g. maven-public), so that its artifacts resolve through that group


H. Adding your custom jars into the repository

  1. Create a repository with maven2 hosted recipe
  2. Obtain the created repository URL and run the following Maven deploy-file command on your jar (the groupId and URL below match the upload log; fill in the version and the server id from settings.xml)
 mvn deploy:deploy-file -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=<version> ^
   -Dpackaging=jar -Dfile=C:/Users/vishwaka/.m2/ojdbc6.jar ^
   -DrepositoryId=<server-id> -Durl=http://localhost:8081/repository/project-customs/
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO] --- maven-deploy-plugin:2.7:deploy-file (default-cli) @ standalone-pom ---
Uploading: http://localhost:8081/repository/project-customs/com/oracle/ojdbc6/
Uploaded: http://localhost:8081/repository/project-customs/com/oracle/ojdbc6/ (1942 KB at 583.2 KB/sec)
Uploading: http://localhost:8081/repository/project-customs/com/oracle/ojdbc6/
Uploaded: http://localhost:8081/repository/project-customs/com/oracle/ojdbc6/ (392 B at 0.1 KB/sec)
Downloading: http://localhost:8081/repository/project-customs/com/oracle/ojdbc6/maven-metadata.xml
Downloaded: http://localhost:8081/repository/project-customs/com/oracle/ojdbc6/maven-metadata.xml (302 B at 0.2 KB/sec)
Uploading: http://localhost:8081/repository/project-customs/com/oracle/ojdbc6/maven-metadata.xml
Uploaded: http://localhost:8081/repository/project-customs/com/oracle/ojdbc6/maven-metadata.xml (302 B at 0.1 KB/sec)
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 15.768 s
[INFO] Finished at: 2017-06-15T11:29:59-07:00
[INFO] Final Memory: 11M/245M
[INFO] ------------------------------------------------------------------------


You should be able to see this in your repository's assets once the upload is successful. The deploy uses credentials from the server entry in your settings.xml, so make sure that is configured.

3. After this, add the project-customs repository as a member of the maven-public group of repositories


I. Test by running a clean build of your maven project

  • Delete the folder containing the jar files in the path \.m2\repository\com\oracle\ojdbc6\
  • Rerun the maven build using mvn clean compile
  • Verify the following logs in the build
Downloading: http://localhost:8081/repository/maven-public/com/oracle/ojdbc6/
Downloaded: http://localhost:8081/repository/maven-public/com/oracle/ojdbc6/ (392 B at 2.8 KB/sec)
Downloading: http://localhost:8081/repository/maven-public/com/oracle/ojdbc6/
Downloaded: http://localhost:8081/repository/maven-public/com/oracle/ojdbc6/ (1942 KB at 10846.1 KB/sec)

Continuous Integration in Pipeline as Code Environment with Jenkins, JaCoCo, Nexus and SonarQube

Github Link for the source code:

Here we discuss the setup for a continuous integration pipeline, for a Mavenized Spring Boot build with JaCoCo coverage reports and Sonar metrics. I used a Windows machine with Tomcat 8 for hosting Jenkins, but a similar setup can be done on any OS where the Sonar server can run on the same system.

A. Get the following artifacts on the system

  1. Tomcat server with Java JDK – Configure the server.xml to run on port 8099
  2. Setup Maven & other build utilities on your machine
  3. Access to Github source code
  4. Source code should have the Jenkinsfile in project root to be used by the pipeline
  5. Source should have the SonarQube properties file (typically in the project root for the SonarQube project linkage & source paths


Jenkinsfile and snapshot
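The Jenkinsfile itself did not survive formatting; a minimal scripted-pipeline sketch of the stages this post assumes (Maven build, JaCoCo publishing, SonarQube analysis; the server name 'SonarQube' and scanner path are placeholders that must match the Jenkins configuration described in step E):

```groovy
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Build') {
        bat 'mvn clean package -DskipTests'
    }
    stage('Test & Coverage') {
        bat 'mvn test'
        // Publishes the JaCoCo coverage report (requires the JaCoCo plugin)
        jacoco()
    }
    stage('Sonar') {
        // Injects the server URL and token configured under Manage Jenkins
        withSonarQubeEnv('SonarQube') {
            bat "${env.JENKINS_HOME}\\sonar-scanner\\bin\\sonar-scanner.bat"
        }
    }
}
```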

B. Setup & Startup SonarQube

  1. Download the SonarQube package from
  2. Start sonar server: SONAR_HOME\bin\windows-x86-32\StartSonar.bat (for 32 bit Windows)
  3. Open Sonar admin page “http://localhost:9000“. Default credentials – admin/admin
  4. Create user in security tab and generate an access token, 50997f4a8c26d5698cccee30cf398c0ed9b98de0
  5. Create a project SPRINGBOOT with a key
  6. Download SonarQube scanner from
  7. Additional configuration from

C. Setup & Startup Tomcat

  1. Download jenkins.war from
  2. Put the jenkins.war file in webapps folder of Tomcat home
  3. Set Environment Variables as follows,
  4. SET JENKINS_HOME="C:/Users/vishwaka/Documents/Workspace/git/jenkinstest/cisetup/jenkins_home"
  5. SET CATALINA_OPTS="-DJENKINS_HOME=C:/Users/vishwaka/Documents/Workspace/git/jenkinstest/cisetup/jenkins_home"
  6. Start the server using startup.bat


Initial launch of Jenkins

D. Initialize Jenkins

  1. Access Jenkins at http://localhost:8099/jenkins
  2. Provide the initial credentials from jenkins_home/secrets/initialPassword*
  3. Install the default set of plugins and proceed
  4. Create a user for this installation
  5. Use “New Item” for creating a pipeline and provide the Jenkinsfile pipeline script from Git SCM for this


Create pipeline project

E. Plugin & Configuration to Jenkins

  1. Add the “JaCoCo plugin” through the Manage Jenkins > Manage Plugins and install without restart
  2. Add “SonarQube Scanner for Jenkins” through the same Plugin Manager as above
  3. Go to the Manage Jenkins > Configure system and provide the credentials for Sonar Server
  4. Add the “SonarQube Server” name running on URL http://localhost:9000, along with the user authentication token generated in the SonarQube server user administration page
  5. Remove the auto install option and add the “Sonar Scanner” env variable SONAR_RUNNER_HOME installation path as $JENKINS_HOME/sonar-scanner- through “Global Tool Configuration”
  6. Make sure the Sonar scanner path is configured properly as its path is hard coded in Jenkinsfile.


Global Tool Configuration

F. Run the Build now for this pipeline

  1. The pipeline is at http://localhost:8099/jenkins/job/JENKINS-BOOT/JenkinsStatusPipeline
  2. Checkout the coverage report within the pipeline reports JenkinsJacoco
  3. You can also look at the Sonar reports at http://localhost:9000/dashboard?id=JENKINSBOOT JenkinsToSonar
  4. If you have many such projects, it's better to execute all your job pipelines from a parent job pipeline. You can create one and call it “BUILD-ALL-JOBS”. It can be configured using the pipeline script below to run the JENKINS-BOOT job described in the example above, as well as any other fictitious job called JENKINS-BOOT-XXX.
node {
    stage('JENKINS-BOOT-STAGE-A') {
        build job: 'JENKINS-BOOT'
    }
    stage('JENKINS-BOOT-STAGE-B') {
        build job: 'JENKINS-BOOT-XXX'
    }
}

There are plugins to build jobs in parallel as well but that depends on what workflow you want to build in your system.
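For example, the scripted pipeline's built-in `parallel` step can fan the two jobs out concurrently (a sketch; job names as above):

```groovy
node {
    stage('BUILD-ALL-PARALLEL') {
        parallel(
            'boot':     { build job: 'JENKINS-BOOT' },
            'boot-xxx': { build job: 'JENKINS-BOOT-XXX' }
        )
    }
}
```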

G. Add Nexus repository management capability to your CI environment

See the first part of this post:

Repository Management with Nexus 3 for your Mavenized project, including release and snapshot distribution

H. Finally put everything into a script that can run it all

Pardon my naive & careless script; my setup is on a local Windows development workstation.

@echo off
echo "--------------------------------------------------------------------------"
echo "------------------------- CI STARTUP SCRIPT ------------------------------"
echo "--------------------------------------------------------------------------"

echo "Startup SonarQube Server"
echo "------------------------"
START CMD /C "cd c:\Dock\ci\sonar\sonarqube-6.4\bin\windows-x86-64 & CALL StartSonar.bat"
echo "Sonar may be up on http://localhost:9000/"

echo "Startup Nexus Repository Manager"
echo "--------------------------------"
START CMD /C "cd c:\Dock\ci\nexus\nexus-3.3.1-01\bin & nexus.exe /run"
echo "Nexus may be up on http://localhost:8081/"

echo "Startup Jenkins on Tomcat"
echo "-------------------------"
START CMD /C "cd c:\Dock\ci\jenkins\apache-tomcat-8.5.15\bin & startup.bat"
echo "Jenkins may be up on http://localhost:8099/jenkins"

echo "-------------------------------- END -------------------------------------"




Simplify Tomcat/JVM Monitoring with Mission Control (JMX) or VisualVM

A. JMX Mission Control

Oracle Java Mission Control enables you to monitor and manage Java applications without introducing the performance overhead normally associated with these types of tools. It uses data collected for normal adaptive dynamic optimization of the Java Virtual Machine (JVM). Besides minimizing the performance overhead, this approach eliminates the problem of the observer effect, which occurs when monitoring tools alter the execution characteristics of the system.
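The same platform MBeans that Mission Control displays over a remote connection can be read in-process through the standard java.lang.management API, which is a quick way to sanity-check what the JMX endpoint will expose (a minimal sketch; the class name is mine):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.RuntimeMXBean;

// Reads the same platform MBeans (memory, uptime) that JMX clients
// such as Mission Control or VisualVM display remotely.
public class JmxPeek {

    public static long usedHeapBytes() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        return memory.getHeapMemoryUsage().getUsed();
    }

    public static long uptimeMillis() {
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
        return runtime.getUptime();
    }

    public static void main(String[] args) {
        System.out.println("Heap used (bytes): " + usedHeapBytes());
        System.out.println("JVM uptime (ms): " + uptimeMillis());
    }
}
```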

1. Server setup

> Provide the JMX configuration to the Tomcat server 
> Create a file in $CATALINA_HOME/bin 
> Add the following entry to the script file (the standard JMX remote flags; disable authentication and SSL only on a trusted network) 
   export CATALINA_OPTS="-Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=3614 \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Dcom.sun.management.jmxremote.authenticate=false"
> This will enable a JMX listener on port 3614 when Tomcat is restarted
> Make sure that this port is open and accessible to the clients that need it. 
  Exposing an unauthenticated JMX port has security implications, so this is not advisable for a production environment.
> Restart the server to allow the properties to be set and initialized.

2. Mission Control setup

Download: mission control
In my test I had used an eclipse plugin available at
> Add this plugin to Eclipse using Install New Software
> Launch a new connection to the JVM and provide the IP and port on which the jmx remote system is running.



B. An alternate way is to use VisualVM

VisualVM is a visual tool integrating several command-line JDK tools and lightweight profiling capabilities.

Here as well we need to start the jstatd daemon on the server; it is packaged with the JDK and opens up connections for the VisualVM client.


1. Start the jstatd daemon

> Make sure the default RMI port is open as per the javase documentation
> Create a policy file named jstatd.all.policy with the following content (the standard policy from the Java SE docs)
  grant codebase "file:${java.home}/../lib/tools.jar" {
     permission;
  };
> Start the daemon 
  jstatd -J-Djava.security.policy=jstatd.all.policy

> Alternate option to run this silently
  nohup jstatd &>/dev/null &

2. Start the VisualVM Client

> Start the Visual VM client and add remote host using its IP
> You will be able to monitor the jvm on that machine




Java Integration with Kafka distributed message broker

Kafka is a distributed, partitioned, replicated commit log service widely used as a message broker; it was open-sourced by LinkedIn. It functions as a central messaging bus for various domains and frameworks, especially big-data systems.

It works as a broker between producers and consumers, with stronger ordering guarantees than traditional messaging systems. It provides a single consumer abstraction that generalizes both the queuing and publish-subscribe models: the consumer group.



The API is provided by Kafka-clients version 

Here I walk through a sample that publishes a message to a Kafka topic, with a listener constantly waiting on a consumer group that works as a thread pool subscribed to the topic.

A. Setting up Kafka is as simple as extracting the archive and starting the server.



Say you have a virtual machine as per my setup and the host IP as

  1. Download and extract the above kafka-2.11-0.9 build in /opt folder, and set the KAFKA_HOME variable
  2. Configure $KAFKA_HOME/config/
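Assuming the stock scripts and the default properties files under $KAFKA_HOME/config, starting the stack and creating a topic looks like this (the topic name is illustrative):

```shell
# Start ZooKeeper first, then the Kafka broker, each with its default config
$KAFKA_HOME/bin/ $KAFKA_HOME/config/ &
$KAFKA_HOME/bin/ $KAFKA_HOME/config/ &

# Create a topic to publish on
$KAFKA_HOME/bin/ --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test-topic
```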

kafka broker internals

If you are into Spring and Java, then this is a great introduction resource from SpringDeveloper. However, in this tutorial I am covering a very basic implementation.


B. Walk-through of the Producer and Listener


The code has two main classes responsible for the send and listen functionality. KafkaMain is responsible for launching the KafkaListener in a pre-configured consumer thread pool.

The producer is configured with the broker url and client identifier.

//The kafka producer is configured with the topic and broker address for sending the message as follows: 

 Properties props = new Properties();
 props.put("bootstrap.servers", prop.getProperty("producer.url"));
 props.put("", prop.getProperty(""));
 props.put("acks", "all");
 props.put("retries", 0);
 props.put("batch.size", 16384);
 props.put("", 1);
 props.put("buffer.memory", 33554432);
 props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
 props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
 this.producer = new KafkaProducer<>(props);

The producer can send the message on a topic using the send API:

producer.send(new ProducerRecord<String, String>(topic, message));

The listener uses the consumer-group concept: the consumers in a group divide the topic's partitions among themselves and transact on the messages queued for consumption and further processing.


The consumer group is created using the threadpool executors as follows,

final ExecutorService executor = Executors.newFixedThreadPool(numConsumers);
final List<KafkaListener> consumers = new ArrayList<>();
for (int i = 0; i < numConsumers; i++) {
 KafkaListener consumer = new KafkaListener(i, prop, topics);
 consumers.add(consumer);
 executor.submit(consumer);
}

Configure the listener and create a consumer

props.put("bootstrap.servers", prop.getProperty("consumer.url"));
props.put("", prop.getProperty("")); 
props.put("", prop.getProperty(""));
props.put("", "true");
props.put("", "2000");
props.put("", "30000");
props.put("key.deserializer", StringDeserializer.class.getName());
props.put("value.deserializer", StringDeserializer.class.getName());
this.consumer = new KafkaConsumer<>(props);
this.timeOut = Integer.parseInt( prop.getProperty("consumer.poll.timeout"));

Subscribe and listen infinitely on the topic until the shutdown hook is executed for this thread pool.

while (true) {
 ConsumerRecords<String, String> records = consumer.poll(this.timeOut);
 for (ConsumerRecord<String, String> record : records) {" - " + record.offset() + ", Data: " + record.value());
 }
}


The code, available on GitHub, is self-sufficient and was simple to create.

Thanks to the great article that set me off on this trial project.

Spring Rest API and Mongodb with GridFS



Here is a first draft of the MongoDB-based file data store service.


There are two more branches, apart from the master branch, which have the file upload functionality

  1. Check out the desired branch, e.g. git checkout FileUploadGridFSSpring
  2. Pull the current branch
    git pull
  3. After making changes in any of them index and commit
    git add .
    git commit -m "Updated the code and added a message"
  4. Push changes to the github repository
    git push origin FileUploadGridFSSpring
    git push origin MultiPartFileUpload


Download :

  1. Unzip the zip contents in C:\mongodb\
  2. Create C:\data\db
  3. Execute C:\mongodb\bin\mongod.exe --dbpath C:\data\db




  1. Start the mongod.exe standalone server
  2. Import the source as a Maven project in the Eclipse STS IDE
  3. Deploy on a Tomcat instance to see the data from MongoDB
  4. Alternatively, the Tomcat Maven plugin enables Maven-based initialization: mvn tomcat7:run
  5. Open http://localhost:8088/RDataServe to check out the grid of file data


Mongo Shell Cmds

show dbs
show collections

//Create a capped collection named files with a maximum size of 5 megabytes and a maximum of 5000 documents
db.createCollection("files", { capped : true, size : 5242880, max : 5000 } )

//Pre-allocate a 2 gigabyte, uncapped collection named files
db.createCollection("files", { size: 2147483648 })

//Drop the capped collection
db.files.drop()

//Insert
db.files.insert( { _id: 1, fileId: 1234, fileName: 'R_XXX.EXE', filePath: '/opt/storage/rhldata', fileSizeInKB: 123412, fileExtensionType: 'EXE' })
db.files.insert( { fileId: '1235', fileName: 'V_XXX.EXE', filePath: '/opt/storage/rhldata', fileSizeInKB: 123342, fileExtensionType: 'EXE' })

//Query
db.files.find({fileId:1234})

Here is a quick setup for Tomcat 7 on CentOS 6, including SSL configuration with self-signed certificates to run Tomcat 7 over HTTPS.

Setup tomcat

1.) Pre-requisite:

Since Java is a prerequisite, install the OpenJDK development package:

$ yum install java-1.7.0-openjdk-devel.x86_64

Add the JAVA_HOME environment variable to ~/.bashrc file 
  #Env variables for java
  export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk.x86_64
  export CATALINA_HOME=/opt/tomcat7
  export PATH=$PATH:$JAVA_HOME/bin

Open the ports that will be used by tomcat for service

Flush the tables before config
$ iptables -F
$ iptables -t nat -F

Now setup INPUT ports
$ iptables -I INPUT -p tcp --dport 8443 -j ACCEPT
$ iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
$ service iptables save
$ service iptables restart

In case we want to route access from port 80 to Tomcat's 8080

$ iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
$ iptables -t nat -I OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 8080

2.) Download and setup tomcat 7

$ wget
$ tar -xvzf apache-tomcat-7.0.62.tar.gz
$ mv apache-tomcat-7.0.62 tomcat7
$ mv tomcat7/ /opt/

3.) Create a Tomcat-specific user and user group. Since Tomcat runs as a service, it should not run as the root user.

$ groupadd tomcat
$ useradd -g 99 -s /sbin/nologin -d /opt/tomcat7 tomcat
$ passwd tomcat
Adjust ownership for the new user and group, giving the new user access to the Tomcat directories. 
$ chown -R tomcat:tomcat /opt/tomcat7
$ chmod 775 /opt/tomcat7/webapps
$ chmod +x /opt/tomcat7/bin/*.sh

4.) Create a startup service script

$ vim /etc/init.d/tomcat
Add the following content to this script (the start/stop labels and script names were lost to formatting; this is the standard pattern using Tomcat's and
#!/bin/bash
# description: Tomcat Start Stop Restart
# processname: tomcat
# chkconfig: 234 20 80
export PATH

case $1 in
start)
   cd $CATALINA_HOME/bin
   /bin/su -s /bin/bash tomcat ./
   ;;
stop)
   cd $CATALINA_HOME/bin
   /bin/su -s /bin/bash tomcat ./
   ;;
restart)
   cd $CATALINA_HOME/bin
   /bin/su -s /bin/bash tomcat ./
   cd $CATALINA_HOME/bin
   /bin/su -s /bin/bash tomcat ./
   ;;
esac
exit 0

5.) Add the tomcat script as a service

$ chmod 755 /etc/init.d/tomcat
$ chkconfig --add tomcat
$ chkconfig --level 234 tomcat on
$ chkconfig --list tomcat

6.) Start/Stop the tomcat service

 $ service tomcat start
 $ service tomcat stop

SSL security with self-signed certificates on tomcat

In order to set up this Tomcat with SSL, use the following configuration steps,

1.) Generate a keystore file for this server

This will be used as a self-signed certificate for secured connectivity. 
Default path: /home/%user.home%/.keystore
keytool -genkeypair -dname "CN=, OU=Rahul, O=Luhar, L=Vishwakarma, ST=Karnataka, C=IN" -alias mysslsecuredserver -keyalg RSA -ext san=ip:

2.) Add the relevant configuration to the Tomcat HTTPS connector in conf/server.xml (the surrounding Connector element was lost to formatting; this is the standard Tomcat 7 form)

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
 maxThreads="150" scheme="https" secure="true"
 clientAuth="false" sslProtocol="TLS" keystoreFile="${user.home}/.keystore" keystorePass="mypassman"/>

3.) Add the server IP to the truststore in order to allow this self-signed certificate

Use the utility to add the IP to the trusted store.
Compile it, then run the following command to generate the jssecacerts binary, where <server-ip> is the web server's IP:
$ java InstallCert <server-ip>:8443
Copy the generated jssecacerts from this path to %JAVA_HOME%\jre\lib\security

You can also export the generated certificate from the keystore (with the password) and import it on other systems on the network that negotiate with this server.

$ keytool -export -alias mysslsecuredserver -file mysslsecuredserver.cer
  $ keytool -import -trustcacerts -alias mysslsecuredserver -file mysslsecuredserver.cer

Verify that Tomcat is running and secured via HTTPS.

For a proper SSL certificate issued by a hosting provider, import the root certificate into the Java cacerts keystore:

keytool -import -trustcacerts -file NewRootCACertificate.crt -keystore "%JAVA_HOME%\jre\lib\security\cacerts"

Test Link:

AspectJ component for my services audit logger

This blog leverages the AspectJ aspect-oriented programming concept to solve a very basic problem: auditing the visitors to your service and the response times. These metrics become important when you want to build monitoring or access analytics around your services.

Pre-requisite for AspectJ in your pom.xml

 <!-- Spring AspectJ -->
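Only the comment survived the formatting; a typical dependency set for Spring AOP with AspectJ would be along these lines (versions are illustrative):

```xml
<!-- Spring AspectJ -->
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-aop</artifactId>
  <version>${spring.version}</version>
</dependency>
<dependency>
  <groupId>org.aspectj</groupId>
  <artifactId>aspectjrt</artifactId>
  <version>1.8.9</version>
</dependency>
<dependency>
  <groupId>org.aspectj</groupId>
  <artifactId>aspectjweaver</artifactId>
  <version>1.8.9</version>
</dependency>
```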

In your application context add the configuration as per the standard aop proxy. You should also include component-scan and annotation-driven.

<!-- Aspects -->
 <aop:aspectj-autoproxy proxy-target-class="true"/>

Here is the actual code that intercepts around the public calls in the controller. It prints the service URL, method, parameters and arguments. We can also obtain the user from the session context.

 * @author Rahul Vishwakarma
 * This class will log all the service calls made to the Generic Application 
 * relying on the @Around advice
 * Ref: 
@Aspect
@Component
public class GenericLoggerAspect {
 /** Logger for this class and subclasses */
 private static final Logger log = LoggerFactory.getLogger(GenericLoggerAspect.class);
 public static ConcurrentHashMap<String,RequestData> responseTime = new ConcurrentHashMap<String,RequestData>();
 @Autowired
 Utility utility;
 * Inner class for request info
 * @author Rahul
 public class RequestInfo{
   public int responseTimeMills = 0;
   public Date accessTime = null;
   public String urlPath;
   public String requestType;
   public String args;
 }
 * Inner class for Data capturing request information
 * @author Rahul
 public class RequestData {
   public RequestInfo requestInfo;
   public String api;
   public UserInfo userInfo;
   public RequestData(RequestInfo requestInfo,UserInfo userInfo,String methodSignature){
     this.requestInfo = requestInfo;
     this.userInfo = userInfo;
     this.api = methodSignature;
   }
 }

 * User and client related info
 * @author Rahul
 public class UserInfo{
   public String userName;
   public int userId;
   public String sessionId;
   public String role;
   public String clientIp;
 }
 private String getRepresentation(Object [] params){
   StringBuilder sb = new StringBuilder();
   for(int i=0;i<params.length;i++){
     sb.append(params[i]).append(",");
   }
   if(sb.length() > 0){
     return sb.substring(0, sb.length() - 1);
   }
   return sb.toString();
 }
enum IssueType{ NONE, ISSUE_URL_SUFFIX, OTHER }
@Around("execution(@*..RequestMapping * * (..))")
public Object log_around(ProceedingJoinPoint pjp) throws Throwable {
  Object obj = null;
  IssueType issueType = IssueType.NONE;
  String error="Error in GenericLoggerAspect";
  String methodSignature = pjp.getSignature()+"";
  StringBuffer args = new StringBuffer();
  //Append arguments
  Object[] arg = pjp.getArgs();
  for (int i = 0; i < arg.length; i++) {
    args.append(arg[i]).append(",");
  }
  if (arg.length > 0) {
   args.deleteCharAt(args.length() - 1);
  }"\tSTART {}-{}", methodSignature+" ["+Thread.currentThread().getId()+"]",args.toString());

  UserInfo userInfo = new UserInfo();
  UserSecure userSecure = utility.getUserInfo();
  if(userSecure != null){
   userInfo.clientIp = userSecure.getClientIp();
   userInfo.userId = userSecure.getUserId();
   userInfo.sessionId = userSecure.getSessionId();
   userInfo.userName = userSecure.getUserName();
   if(userSecure.getAuthorities() != null)
    userInfo.role = userSecure.getAuthorities().toString();
  }
  ServletRequestAttributes sra = (ServletRequestAttributes)RequestContextHolder.getRequestAttributes();
  String urlPath="";
  if(sra != null){
    HttpServletRequest req = sra.getRequest();
    urlPath = req.getServletPath();
  }
  long start = System.currentTimeMillis();
  try {
   if(urlPath.endsWith("/")){
    issueType = IssueType.ISSUE_URL_SUFFIX;
    obj = null;
   } else {
    obj = pjp.proceed();
   }
 String requestType = "ajax";
 if(obj!=null && (obj instanceof ModelAndView)){
   requestType = "page";
 int elapsedTime = (int) (System.currentTimeMillis() - start); 
 RequestInfo requestInfo = new RequestInfo();
 requestInfo.accessTime = (new Date());
 requestInfo.requestType = (requestType);
 requestInfo.responseTimeMills = (elapsedTime);
 requestInfo.urlPath = (urlPath);
 requestInfo.args = (args.toString());
" + userInfo.sessionId + " REQ {} by "+userInfo.userName+"@"+userInfo.clientIp+", time {} mills; args ["+args.toString()+"]", urlPath + " ["+ Thread.currentThread().getId() +"]", elapsedTime);
 responseTime.put(methodSignature, requestData);
 }catch(Exception ex){
  issueType = IssueType.OTHER;
  error = ex.getMessage();
 }
 switch(issueType){
  case ISSUE_URL_SUFFIX: throw new GenericException("URL ends with trailing /");
  case OTHER: throw new GenericException(error);
  default: break;
 }
 return obj;
}
 /**
  * @param ex
  */
 @AfterThrowing(pointcut="execution(public * com.generic.controller.*.*(..))",throwing="ex")
 public void MethodError(Exception ex){
   log.error("@Exception {}", ex.toString());
 }

 @Pointcut("execution(public * *(..))")
 private void anyPublicOperation() {"Testing the public Execution call");
 }
}

When we run the system, we expect the following log output from the @Around join point.


17-Jul-2014 03:25:35,067-INFO – GenericLoggerAspect:130 –       START List com.humesis.generic.controller.UserController.getUsers(HttpServletResponse)


17-Jul-2014 03:25:35,106-INFO – GenericLoggerAspect:176 – 674B8F114926E5A3BB143E7126D828C7 REQ /users [28] by rahul@0:0:0:0:0:0:0:1, time 36 mills; args [HttpSessionSecurityContextRepository]

This data can be asynchronously audited or logged to generate access patterns or hotspots in service usage.
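Since the aspect stores the latest RequestData per method signature in a ConcurrentHashMap, a background task can periodically drain that map and persist the batch off the request thread. A simplified, self-contained sketch (the RequestData here is a stand-in for the aspect's inner class; class and field names are mine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Periodically drains the response-time map populated by the aspect so
// entries can be persisted or aggregated asynchronously.
public class AuditDrain {

    // Stand-in for the aspect's RequestData inner class.
    public static class RequestData {
        public final String api;
        public final int responseTimeMills;
        public RequestData(String api, int responseTimeMills) {
            this.api = api;
            this.responseTimeMills = responseTimeMills;
        }
    }

    public static final ConcurrentHashMap<String, RequestData> responseTime =
            new ConcurrentHashMap<>();

    // Removes and returns a snapshot of all collected entries.
    public static List<RequestData> drain() {
        List<RequestData> batch = new ArrayList<>();
        for (Map.Entry<String, RequestData> e : responseTime.entrySet()) {
            RequestData d = responseTime.remove(e.getKey());
            if (d != null) {
                batch.add(d);
            }
        }
        return batch;
    }

    public static void main(String[] args) {
        responseTime.put("getUsers", new RequestData("getUsers", 36));
        responseTime.put("getFiles", new RequestData("getFiles", 12));
        System.out.println("Drained " + drain().size() + " audit records");
    }
}
```

A scheduled executor could call drain() every few seconds and hand the batch to whatever audit store you use.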