An AspectJ component for a services audit logger

This post leverages AspectJ, the aspect-oriented programming support in Spring, to solve a very basic problem: auditing who visits your services and how long each call takes. These parameters are important raw material when we want to build access reports or performance monitoring around them.

Pre-requisites: add the AspectJ dependencies to your pom.xml

 <!-- Spring AspectJ -->
 <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aspects</artifactId>
    <version>${org.springframework-version}</version>
    <scope>compile</scope>
 </dependency>
 <dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
    <version>${org.aspectj-version}</version>
    <scope>compile</scope>
 </dependency>
 <dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjrt</artifactId>
    <version>${org.aspectj-version}</version>
 </dependency>
 <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aop</artifactId>
    <version>${org.springframework-version}</version>
 </dependency>
 <dependency>
    <groupId>cglib</groupId>
    <artifactId>cglib</artifactId>
    <version>3.1</version>
 </dependency>
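The dependency versions above are property references; if they are not already defined in the POM, a properties block along these lines is needed. The exact versions shown are illustrative for a Spring 3 / AspectJ setup of this era, not a recommendation:

```xml
<properties>
    <org.springframework-version>3.2.9.RELEASE</org.springframework-version>
    <org.aspectj-version>1.7.4</org.aspectj-version>
</properties>
```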

In your application context, add the configuration for the standard AOP proxy. You should also include component-scan and annotation-driven configuration so that the @Aspect bean is discovered.

<!-- Aspects -->
 <aop:aspectj-autoproxy proxy-target-class="true"/>
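For completeness, the component-scan and annotation-driven elements mentioned above might look like this. The base package is a placeholder, and the `context` and `mvc` namespaces must be declared on the beans root element:

```xml
<!-- base package is a placeholder; point it at wherever the @Aspect bean lives -->
<context:component-scan base-package="com.generic" />
<mvc:annotation-driven />
```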

Here is the actual code that intercepts, via @Around advice, the public calls in the controllers. It logs the service URL, request type, and arguments, and it obtains the current user from the session context.

/**
 * @author Rahul Vishwakarma
 * This class will log all the service calls made to the Generic Application 
 * relying on the @Around advice
 * Ref: http://docs.spring.io/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies 
 */
@Component
@Aspect
public class GenericLoggerAspect {
 /** Logger for this class and subclasses */
 private static final Logger log = LoggerFactory.getLogger(GenericLoggerAspect.class);
 
 public static ConcurrentHashMap<String,RequestData> responseTime = new ConcurrentHashMap<String,RequestData>();
 
 @Autowired 
 Utility utility;
/**
 * Inner class for request info
 * @author Rahul
 *
 */
 public class RequestInfo{
 public int responseTimeMills = 0;
 public Date accessTime = null;
 public String urlPath;
 public String requestType;
 public String args;
 }
 /**
 * Inner class for Data capturing request information
 * @author Rahul
 *
 */
 public class RequestData {
   public RequestInfo requestInfo;
   public String api;
   public UserInfo userInfo;
 
   public RequestData(RequestInfo requestInfo,UserInfo userInfo,String methodSignature){
   this.requestInfo = requestInfo;
   this.userInfo = userInfo;
   this.api = methodSignature;
   }
 }

/**
 * User and client related info
 * @author Rahul
 *
 */
 public class UserInfo{
   public String userName;
   public int userId;
   public String sessionId;
   public String role;
   public String clientIp;
 }
 
 private String getRepresentation(Object[] params) {
   StringBuilder sb = new StringBuilder();
   if (params != null) {
     for (Object param : params) {
       String value = String.valueOf(param);
       // Skip default Object#toString() output (e.g. "com.foo.Bar@1a2b3c")
       if (value.contains("@")) {
         continue;
       }
       if (sb.length() > 0) {
         sb.append(',');
       }
       sb.append(value);
     }
   }
   return sb.toString();
 }
enum IssueType{
  ISSUE_URL_SUFFIX, OTHER, NONE
}
@Around("execution(@*..RequestMapping * * (..))")
public Object log_around(ProceedingJoinPoint pjp) throws Throwable {
 
  Object obj = null;
  IssueType issueType = IssueType.NONE;
  String error="Error in GenericLoggerAspect";
 try{
  String methodSignature = pjp.getSignature()+"";
  // Append the call arguments, comma separated
  StringBuilder args = new StringBuilder();
  Object[] arg = pjp.getArgs();
  for (int i = 0; i < arg.length; i++) {
    if (i > 0) {
      args.append(',');
    }
    args.append(arg[i]);
  }
 
  log.info("\tSTART {}-{}", methodSignature+" ["+Thread.currentThread().getId()+"]",args.toString());

  UserInfo userInfo = new UserInfo();
  UserSecure userSecure = utility.getUserInfo();
  if(userSecure != null){
   userInfo.clientIp = userSecure.getClientIp();
   userInfo.userId = userSecure.getUserId();
   userInfo.sessionId = userSecure.getSessionId();
   userInfo.userName = userSecure.getUserName();
  if(userSecure.getAuthorities() != null)
   userInfo.role = userSecure.getAuthorities().toString();
  }
  ServletRequestAttributes sra = (ServletRequestAttributes)RequestContextHolder.getRequestAttributes();
 
  String urlPath=""; 
 
  if(sra!=null){
    HttpServletRequest req = sra.getRequest();
  if(req!=null){
    urlPath = req.getServletPath();
  } 
 }
 
  long start = System.currentTimeMillis();
  if (urlPath.endsWith("/")) {
    // Treat a trailing slash in the servlet path as an error; skip the target method
    issueType = IssueType.ISSUE_URL_SUFFIX;
  } else {
    obj = pjp.proceed();
  }
 
 String requestType = "ajax";
 if(obj!=null && (obj instanceof ModelAndView)){
   requestType = "page";
 }
 int elapsedTime = (int) (System.currentTimeMillis() - start); 
 RequestInfo requestInfo = new RequestInfo();
 requestInfo.accessTime = (new Date());
 requestInfo.requestType = (requestType);
 requestInfo.responseTimeMills = (elapsedTime);
 requestInfo.urlPath = (urlPath);
 requestInfo.args = (args.toString());
 
 RequestData requestData = new RequestData(requestInfo, userInfo,methodSignature);
  log.info("{} REQ {} [{}] by {}@{}, time {} mills; args [{}]",
      new Object[] { userInfo.sessionId, urlPath, Thread.currentThread().getId(),
          userInfo.userName, userInfo.clientIp, elapsedTime, args });
 responseTime.put(methodSignature, requestData);
 //addUserAuditLog(requestData);
 
  } catch (Exception ex) {
    issueType = IssueType.OTHER;
    error = ex.getMessage();
    // Log the full stack trace, not just the message
    log.error("Exception in GenericLoggerAspect advice", ex);
  }
 
 if (obj == null) {
   switch (issueType) {
   case ISSUE_URL_SUFFIX:
     throw new GenericException("URL ends with a trailing /");
   case OTHER:
     throw new GenericException(error);
   default:
     break;
   }
 }
return obj;
}
/**
 * Logs any exception thrown from the controller layer.
 * @param ex the exception raised by the intercepted method
 */
 @AfterThrowing(pointcut = "execution(public * com.generic.controller.*.*(..))", throwing = "ex")
 public void methodError(Exception ex) {
   log.error("@Exception {}", ex.toString());
 }
 
 // Note: the body of a @Pointcut method is never executed; it only names the expression.
 @Pointcut("execution(public * *(..))")
 private void anyPublicOperation() {}
}

When we run the system we expect log output like the following from the @Around join point.

Before:

17-Jul-2014 03:25:35,067-INFO – GenericLoggerAspect:130 –       START List com.humesis.generic.controller.UserController.getUsers(HttpServletResponse)

After:

17-Jul-2014 03:25:35,106-INFO – GenericLoggerAspect:176 – 674B8F114926E5A3BB143E7126D828C7 REQ /users [28] by rahul@0:0:0:0:0:0:0:1, time 36 mills; args [HttpSessionSecurityContextRepository]

This data can asynchronously be audited or logged to generate access patterns or identify hotspots in service usage.
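The asynchronous handoff mentioned above can be sketched with a plain queue and a single worker thread, so the advice never blocks the request. `AuditEntry` and all names here are illustrative stand-ins for the `RequestData` captured by the aspect, not code from the original project:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Minimal sketch of an asynchronous audit pipeline.
class AsyncAuditSketch {

    static class AuditEntry {
        final String urlPath;
        final int responseTimeMills;
        AuditEntry(String urlPath, int responseTimeMills) {
            this.urlPath = urlPath;
            this.responseTimeMills = responseTimeMills;
        }
    }

    private final BlockingQueue<AuditEntry> queue = new LinkedBlockingQueue<AuditEntry>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    final List<AuditEntry> persisted = new ArrayList<AuditEntry>();

    AsyncAuditSketch() {
        worker.submit(new Runnable() {
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        AuditEntry e = queue.take();   // blocks until an entry arrives
                        synchronized (persisted) {     // stand-in for a real DB insert
                            persisted.add(e);
                        }
                    }
                } catch (InterruptedException ignored) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    // Called from the @Around advice; offer() never blocks the request thread.
    void record(String urlPath, int mills) {
        queue.offer(new AuditEntry(urlPath, mills));
    }

    void shutdown() throws InterruptedException {
        worker.shutdownNow();
        worker.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

In a real deployment the worker would batch inserts into an audit table instead of an in-memory list.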

 

Hibernate JPA project to create entities from tables

 

Hibernate JPA 

Step 1: Install Hibernate tools in your Eclipse JEE IDE

Step 2: Create a JPA Project. The JPA Facet should have library configuration disabled and a database connection set, since we will add the Hibernate libraries to the classpath ourselves.


Step 3: Add the hibernate JPA libraries in build path


 

Step 4: Add a connection to the database with tables available

Step 5: In the JPA perspective, right click the project -> JPA Tools -> Generate entities from tables

 

 

Maven archetype project creation through commandline

When creating a new web application project, it helps to follow standard conventions for the project structure. Maven's archetype generation simplifies this by providing a command-line workflow that produces a project skeleton ready to be imported into Eclipse.

Commands to execute the project generation workflow

#Step 1.

Go through the command-line workflow to create a maven-archetype-webapp project. In this session the webapp archetype appeared under filter number 362; the number varies with the archetype catalog.

> mvn archetype:generate

Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): 359: 362

Choose a number: 5: 5

Define value for property 'groupId': : MYTrialProject

Define value for property 'artifactId': : MYTrialProject

Define value for property 'version': 1.0-SNAPSHOT: :

Define value for property 'package': CTFWeb: : com.test.mytrial

Confirm properties configuration: Yes
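The same generation can also be done non-interactively in a single command; a sketch using the standard maven-archetype-webapp coordinates, with group/artifact values mirroring the transcript above:

```shell
mvn archetype:generate \
  -DarchetypeGroupId=org.apache.maven.archetypes \
  -DarchetypeArtifactId=maven-archetype-webapp \
  -DgroupId=com.test.mytrial \
  -DartifactId=MYTrialProject \
  -Dversion=1.0-SNAPSHOT \
  -DinteractiveMode=false
```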

#Step 2: Create the relevant folder structure within the src folder of the project using Windows Explorer or the command line


 

#Step 3: Eclipse IDE import

Import the project using Import > Existing Maven Projects

Migrating from Hibernate-EHCache in-memory resource hell towards optimistic AWS ElastiCache or Memcached

Taking my discussion further away from Hibernate-EHCache and its Terracotta servers.

The in-memory implementations are for the faint-hearted. The kind of load we saw was unexpected, and it pushed us to think of a better way. The hibernate-memcached implementation looks fine on paper, but I am not fully convinced.

So, what do I do? I take it out for a trial run.

Let's migrate the same application to offload the second-level cache to the ElastiCache/Memcached provisioned by AWS.

Step 1: Setup a Memcached local instance or Create an Amazon ElastiCache Node in AWS Public Cloud

Note: In AWS be sure to add the ElastiCache server in the same security group as the EC2 instance used for deployment.

Step 2: Update Maven POM

 <!-- Memcache Hibernate integration -->
 <dependency>
    <groupId>com.googlecode</groupId>
    <artifactId>hibernate-memcached</artifactId>
    <version>1.2</version>
 </dependency>
 <repository>
    <id>hibernate-memcached</id>
    <name>hibernate-memcached</name>
    <url>http://raykrueger.googlecode.com/svn/repository</url>
 </repository>

Step 3: Update the Existing hibernate properties in the session-factory configurations

 <!-- AWS ElastiCache/Memcached Config -->
 <prop key="hibernate.cache.provider_class">com.googlecode.hibernate.memcached.MemcachedCacheProvider</prop>
 <prop key="hibernate.cache.use_query_cache">true</prop>
 <prop key="hibernate.cache.use_second_level_cache">true</prop>
 <prop key="hibernate.memcached.servers">mycache.whatever.cache.amazonaws.com:11211</prop>
 <prop key="hibernate.memcached.cacheTimeSeconds">300</prop>
 <prop key="hibernate.memcached.connectionFactory">DefaultConnectionFactory</prop>
 <prop key="hibernate.memcached.clearSupported">false</prop>

Issues: I still have the @Cache annotations with their concurrency configuration, which does not map well onto Memcached, as mentioned here. I did become hopeful after an informative read discussing the deployment of hibernate-memcached at enterprise scale.

Also, the cache timeouts are small, as I get object-expired exceptions frequently. Need to look into this.

WARN : org.hibernate.cache.ReadWriteCache - An item was expired by the cache while it was locked (increase your cache timeout)
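A likely mitigation for the warning above is to raise the timeout, either globally or per cache region. If memory serves, hibernate-memcached supports a per-region override of the form `hibernate.memcached.[regionName].cacheTimeSeconds`; the property names below should be verified against the project's README before use:

```xml
<!-- Global default timeout in seconds -->
<prop key="hibernate.memcached.cacheTimeSeconds">1800</prop>
<!-- Per-region override (region name is an example entity region) -->
<prop key="hibernate.memcached.com.xyz.abc.MyBean1.cacheTimeSeconds">3600</prop>
```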

Inference: Still not convinced, as I need more freedom to configure, just as Ehcache offers. The benefit of limiting memory resources through a pool configuration cannot be neglected, and I am unsure at this point about the overflow consequences on ElastiCache servers. Ehcache cleanly honors the specified limit and even accepts percentages in each domain cache configuration.

However, scalability can be an issue. For Hibernate's second-level cache we should really be specific, because the data here is structured; only after analyzing the memory footprint and load should we configure the cache parameters. We can scale mindlessly with memcached at other layers, say static web pages, intermediate results, or database-internal cache distribution. Only if in-memory caching is truly a blocker under your load should you move to memcached via the ElastiCache service, and in that scenario you need to dig into the internals of hibernate-memcached to verify it meets your production needs. I am in the process of doing so right now!

Spring Hibernate-3 2nd Level EHCache 2.8 Configurations with C3P0 Pooling

Here I am discussing the configurations required to enable a 2nd Level caching using EHCache in a Spring Hibernate 3 ORM setup.

Pre-Requisites: You should have a working Hibernate project in order to migrate to EHCache with its in-memory plus local-disk style of caching. Be aware that this implementation uses RAM to hold the cached collections and is not suitable for a large dynamic load; for that you should move to a distributed cache using a Terracotta server array, or to an entirely different implementation such as AWS ElastiCache.

Maven Dependencies:

 <dependency>
 <groupId>org.hibernate</groupId>
 <artifactId>hibernate-c3p0</artifactId>
 <version>${org.hibernate-version}</version>
 </dependency>

 <!-- EHCache Dependencies -->
 <dependency>
 <groupId>net.sf.ehcache</groupId>
 <artifactId>ehcache</artifactId>
 <version>2.8.0</version>
 </dependency>

 <dependency>
 <groupId>org.hibernate</groupId>
 <artifactId>hibernate-ehcache</artifactId>
 <version>${org.hibernate-version}</version>
 </dependency>

Step 1: Make sure your Hibernate session factory configuration looks like the following:

<bean id="c3p0DataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource">
 <property name="driverClass" value="${jdbc.driverClassName}" />
 <property name="jdbcUrl" value="${jdbc.url}" />
 <property name="user" value="${jdbc.username}" />
 <property name="password" value="${jdbc.password}" />
 <property name="minPoolSize" value="2" />
 <property name="maxPoolSize" value="50" />
 <property name="maxIdleTime" value="319" />
 </bean>
<bean id="mySessionFactory"
 class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
 <property name="dataSource" ref="c3p0DataSource" />
 <property name="annotatedClasses">
 <list>
 <value>com.xyz.abc.MyBean1</value>
 <value>com.xyz.abc.MyBean2</value>
 </list>
 </property>
 <property name="hibernateProperties">
 <props>
 <prop key="hibernate.dialect">${hibernate.dialect}</prop>
 <prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
 <prop key="hibernate.generate_statistics">${hibernate.generate_statistics}</prop>
 <prop key="hibernate.connection.useUnicode">true</prop>
 <prop key="hibernate.connection.characterEncoding">UTF-8</prop>
 <!-- EHCache Configuration -->
 <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop>
 <prop key="hibernate.cache.use_second_level_cache">true</prop>
 <prop key="hibernate.cache.use_query_cache">true</prop>
 <prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory</prop>
 </props>
 </property>
 </bean>
 <bean id="transactionManager"
 class="org.springframework.orm.hibernate3.HibernateTransactionManager">
 <property name="sessionFactory" ref="mySessionFactory" />
 </bean>
<context:annotation-config />
<tx:annotation-driven />

From the above config you can be sure that the two domain objects configured with the session factory use a pooled connection from C3P0 and have both the second-level cache and the query cache enabled. We also observe that the transactions are annotation-driven, so we don't have to manually begin and commit any of them.

Step 2: EHCache configuration file on the classpath (ehcache.xml)

<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
    maxBytesLocalHeap="100M"
    maxBytesLocalDisk="2G">

 <sizeOfPolicy maxDepth="100000" maxDepthExceededBehavior="abort"/>

 <defaultCache eternal="false" 
    timeToIdleSeconds="120" 
    timeToLiveSeconds="120"
    memoryStoreEvictionPolicy="LRU" 
    statistics="true">
    <persistence strategy="localTempSwap"/>
 </defaultCache>

 <cache name="org.hibernate.cache.StandardQueryCache"
    eternal="false" 
    timeToIdleSeconds="120"
    timeToLiveSeconds="120">
    <persistence strategy="localTempSwap"/>
 </cache>
</ehcache>

This configuration file is picked up by the cache configuration reader. We have enabled a default cache for all the beans and a separate cache for queries. Both caches share the resources pooled in the ehcache attributes; for example, the local heap of 100 MB is split equally among all caches.
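If an even split is not what you want, Ehcache 2.5+ also lets an individual cache claim a share of the pooled heap, including as a percentage. A sketch for one entity region (the cache name must match the entity's region, and the values here are examples):

```xml
<cache name="com.xyz.abc.MyBean1"
    maxBytesLocalHeap="30%"
    eternal="false"
    timeToIdleSeconds="300"
    memoryStoreEvictionPolicy="LRU">
    <persistence strategy="localTempSwap"/>
</cache>
```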

Step 3: Annotate the Domains with the relevant caching annotation

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
@Table(name = "MyTable1")
public class MyBean1 {

...

Step 4: Make sure you have annotation config in your app context

<context:component-scan base-package="com.xyz.abc, com.xyz.def"/>
<context:annotation-config/>

Step 5: Finally, make sure your DAO marks queries as cacheable. This is done by calling setCacheable(true) when creating the query or criteria on the session. Find the relevant examples below.

public Collection<T> getAll() throws DataAccessException {
    return _sessionFactory.getCurrentSession()
        .createCriteria(this._persistentClass)
        .setCacheable(true)
        .list();
}

public Collection<T> findBySQLQuery(String sqlQueryString) throws DataAccessException {
    return _sessionFactory.getCurrentSession()
        .createSQLQuery(sqlQueryString)
        .setCacheable(true)
        .list();
}

public Collection<T> findByHSQLQuery(String hsqlQueryString, Map<String, Object> map) {
    return _sessionFactory.getCurrentSession()
        .createQuery(hsqlQueryString)
        .setCacheable(true)
        .setProperties(map)
        .list();
}

public Collection<T> findByHSQLNamedQuery(String namedQuery, Map<String, Object> map) {
    return _sessionFactory.getCurrentSession()
        .getNamedQuery(namedQuery)
        .setCacheable(true)
        .setProperties(map)
        .list();
}

This should be helpful in terms of code required to wire up the caching in the project.

Test: One way to verify is to switch on Hibernate SQL logging and issue the same get-all query multiple times. The first call should hit the database with a SQL query; subsequent calls should return from the local cache without any SQL reaching the database.

Configure the following in log4j.xml

<category name="org.hibernate" additivity="false">
 <priority value="INFO" />
 <appender-ref ref="console" />
 </category>