Tuesday, September 25, 2012

Generating AS objects for AMF remoting

We know that Flex/Flash is a technology we wish to move away from in favour of HTML5. HTML5 itself doesn't yet seem to deliver everything it promised, but it is definitely going strong, and with Adobe itself saying that HTML5 is the future, newer projects are not using Flex/Flash. There is, however, a sizeable population of existing projects which use Flex and can benefit from the nice tools already available. One of them is GraniteDS. While we generally use BlazeDS as the remoting mechanism, GraniteDS is quite helpful when it comes to smoothing out cross-language quirks. With our typical application stack, we have (and want to continue to have) the bulk of the business logic in a 3-tier JEE app, with a Flex UI talking to it via a remoting message broker, and Spring thrown in to be able to access remote services nicely. The trouble comes in when we pass more than primitives back and forth. I am by no means a Flex expert, but it was a pain to hand-roll the ActionScript (AS) classes mirroring the Java classes that were needed on the Flex UI for rendering purposes. Here is where GraniteDS, with its Maven codegen plugin, stepped in. You can model this as a Maven dependency for your UI module and have the packaged swc included. Let's work it out with an example:

Say you have domain objects called SpecialObject and User, and now you have to display a combination of these in the UI. You create a view object called UserSpecialObjectView which is crafted by your viewing service call. This needs to be passed over to the Flex side (without having to hand-roll it again in AS and without having to worry about changes to the dependent Java classes). We define a module which is supposed to create the AS view objects (called graniteds-tester-vo in our example). In its pom we reuse the configuration given in the documentation and ask it to generate AS classes for all Java classes in the said package; it creates a nice swc that can be included in the UI module.
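For a rough idea of the shape of that pom configuration, here is a sketch. The plugin coordinates, goal and option names below are illustrative (written from memory, not verified); check the GraniteDS documentation for the exact ones your version uses:

```xml
<build>
  <plugins>
    <!-- Illustrative only: verify groupId/artifactId/goal against the GraniteDS docs -->
    <plugin>
      <groupId>org.graniteds</groupId>
      <artifactId>graniteds-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>generate</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <!-- generate AS classes for every Java class in the view-object package -->
        <includes>
          <include>com/example/vo/**/*.java</include>
        </includes>
      </configuration>
    </plugin>
  </plugins>
</build>
```

The generated classes get compiled into the module's swc, which the UI module then pulls in as a regular Maven dependency.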

One of my colleagues asked how the plugin handles customizations that may be needed in the generated classes. The plugin creates two classes for each class that it encounters: a base class and an extension. The base is out of bounds for modifications (it lives in target/generated-sources) and any changes to it are overwritten with each build anyway. The extension is what is available for customization; with the configuration in the example, it goes and sits in src/main/flex and should be checked in. The funny thing here is that the code in src/main/flex depends on target/generated-sources. Eclipse may show this as a warning/error, but the Maven command line handles it correctly, because Maven recognizes source generation as a valid phase in the build lifecycle. It shows up as an error/warning only because the M2Eclipse plugin is unable to handle it. However, once you do a build from the command line, things show up correctly in Eclipse too.


Tuesday, September 18, 2012

Sudden Death

We often have scenarios where we need nimble restarts for our web applications. (The reasons for the restarts can be quite creative, from stale file handles to restarting being a strategy for dealing with leaky libraries.) Let's take Tomcat as our container. In most cases we rely on catalina.sh stop -force to bring the application to a halt and then proceed to start it again. Internally, catalina.sh stop -force does a kill -9 on $CATALINA_PID. However, the kill itself is not synchronous; even a kill -9 will stall if a kernel thread is in the middle of some operation on behalf of the process. A snippet suggested in the references which can work is something like
kill -9 $CATALINA_PID; while kill -0 $CATALINA_PID 2>/dev/null; do sleep 1; done

kill with signal 0 is a special case that can be used to check whether a process is up: it returns 0 (a good return code) if it is able to signal the other process, and a non-zero code if it can't (either because the PID is invalid or because of insufficient permissions). I have seen this while loop used by the regular stop in catalina.sh; when forced, however, it only does the kill -9, and sometimes it takes a non-trivial amount of time (say 10 seconds) for the process to completely halt. Was wondering if it would make sense for Tomcat to include the loop in its stop -force path as well. Have logged a Bugzilla ticket for it as an enhancement to Tomcat.


Friday, September 14, 2012

Eclipse Debugger Step filtering

Ever noticed the T kind of an arrow in the Eclipse debug tab?


It is called "Use Step Filters" and seems a rather useful feature. Say you have code like:
            specialObjects.add(new SpecialClass1(new SpecialClass2(
                    new SpecialClass3(i, String.valueOf(i), Boolean
                            .valueOf(String.valueOf(i))))));

When you wish to debug into the constructor of SpecialClass2 using F5 (Step Into), Eclipse makes you jump through hoops, taking you on a tour of class loaders, security managers, their internals, their grand-daddy's internals (sun.misc.* and com.sun.*), whereas what you really care about are your own classes. Enter step filters. Configure them via Window > Preferences > Java > Debug > Step Filtering as shown, and voila.


A "Step Into" operation now skips classes from the packages listed there. There are a few other options as well, like filtering out getters/setters, static initializers, etc., which can also come in handy.

Happy Eclipsing!


Haha Service!

What am I doing? I am mocking a service! :) Well, what I really wanted to do was write a unit test for one of my Java services and mock its dependent services. So the title should really be more along the lines of "Haha Dependent Services!". I will try to show mocking in action, after looking at the sample tutorial from DZone.

Say I have the following classes:
public interface SpecialService {
    List<SpecialClass1> getSpecialObjects(String criterion);
}

public class SpecialServiceImpl implements SpecialService {
    private SpecialDAO specialDAO;
    public List<SpecialClass1> getSpecialObjects(String criterion) {
        return specialDAO.getSpecialObjects(criterion);
    }
    public void setSpecialDAO(SpecialDAO specialDAO) {
        this.specialDAO = specialDAO;
    }
}

The unit test case for this class using Mockito would look like:
@RunWith(MockitoJUnitRunner.class)
public class SpecialServiceUnitTest {
    @Mock
    private SpecialDAO mockSpecialDAO;
    @InjectMocks
    private SpecialService specialService = new SpecialServiceImpl();
    @Test
    public void testGetSpecialObjects() {
        List<SpecialClass1> testSpecialObjects = getTestSpecialObjects();
        Mockito.when(mockSpecialDAO.getSpecialObjects(Mockito.anyString()))
                .thenReturn(testSpecialObjects);

        String criterion = "mySpecialCriteria";
        List<SpecialClass1> obtainedSpecialObjects = specialService
                .getSpecialObjects(criterion);
        assertNotNull(obtainedSpecialObjects);
        assertEquals(testSpecialObjects.size(), obtainedSpecialObjects.size());
    }

    private List<SpecialClass1> getTestSpecialObjects() {
        List<SpecialClass1> specialObjects = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            specialObjects.add(new SpecialClass1());
        }
        return specialObjects;
    }
}

Let's dissect the class:

  • MockitoJUnitRunner is the runner class (akin to our favorite SpringJUnit4ClassRunner) which takes care of processing the @Mock and @InjectMocks annotations

  • @Mock identifies the dependent entity to be mocked. Here it is the SpecialDAO that needs to be mocked.

  • @InjectMocks indicates the class under test. This instance is introspected for dependent entities that are made available as mocks via the @Mock annotation. Note the explicit instantiation of the implementation: Mockito uses this instance and tries constructor injection, setter injection and field injection (in that order) to inject the mocks.

  • @Test is used to indicate to JUnit that this is a test method

  • In getTestSpecialObjects(), we prepare a list of special objects that we are going to have the mocked DAO return.

  • Mockito.when is invoked with the call to the mocked DAO. The DAO call expects a String, but since we want to return the same result for all inputs, we pass Mockito.anyString(). We then chain it up to the thenReturn clause which binds it to the list of special objects created earlier.

  • Now proceed with the calls as in the vanilla scenario and use regular asserts to ensure correctness. You may use Mockito.verify to assert a bunch of other expectations.
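For contrast, here is a sketch of what Mockito is automating for us: a hand-rolled stub of the DAO that returns a canned list for any input, wired in through the same setter. (The types here are minimal stand-ins for the classes above, simplified to String elements so the snippet is self-contained.) Mockito.when(...anyString()).thenReturn(...) replaces exactly this boilerplate.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class HandRolledMockDemo {
    // Minimal stand-ins for the interfaces from the post.
    interface SpecialDAO {
        List<String> getSpecialObjects(String criterion);
    }

    static class SpecialServiceImpl {
        private SpecialDAO specialDAO;
        List<String> getSpecialObjects(String criterion) {
            return specialDAO.getSpecialObjects(criterion);
        }
        void setSpecialDAO(SpecialDAO specialDAO) {
            this.specialDAO = specialDAO;
        }
    }

    static int run() {
        // Canned result, returned for any criterion - what
        // Mockito.when(...).thenReturn(...) sets up declaratively.
        final List<String> canned = new ArrayList<>(Arrays.asList("one", "two"));
        SpecialServiceImpl service = new SpecialServiceImpl();
        service.setSpecialDAO(new SpecialDAO() {
            public List<String> getSpecialObjects(String criterion) {
                return canned; // ignore the criterion, like anyString()
            }
        });
        return service.getSpecialObjects("anyCriterion").size();
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 2
    }
}
```

Every new dependency means another anonymous class like this; the @Mock/@InjectMocks pair collapses all of it into two annotations.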
I had packaged this as a Maven project for others to play around with, but unfortunately couldn't upload it here, as one can apparently only upload images and other media. And I am too lazy to set up a GitHub repository for it right now :)

Hope this helps!


Thursday, September 13, 2012

Checking stdout and stderr for a process already running

Did you ever run into a scenario where you ran a job (say a Perl or Java process), forgot to redirect its stdout and stderr to files, and your term is now gone too? After some time you suspect that the process is not working, but you cannot see the error messages being generated. Without resorting to snooping techniques like ttysnoop or X-window solutions, I came across a pretty neat way of intercepting those calls (tested on Linux):
strace -ewrite -p <pid>

strace traces all the system calls and signals of the process, and the -ewrite option filters the trace down to just the write calls, which is where stdout and stderr end up. It turned out to be quite useful in a crunch, but one should always think about where stdout and stderr should be redirected in the first place.

Hope this helps!


Wednesday, September 12, 2012

Real size of objects

What does the size of an object really mean in Java? The usual answer of counting the flat or shallow size may not always be what we are looking for. Say we are vending out a lot of data from our application to an external application, and in the process we happen to die with an OOME; it suddenly becomes relevant to know the overall size of the data we are vending out. This is where I stumbled on the concept called "retained size".

From YourKit's site:
Shallow size of an object is the amount of memory allocated to store the object itself, not taking into account the referenced objects. Shallow size of a regular (non-array) object depends on the number and types of its fields. Shallow size of an array depends on the array length and the type of its elements (objects, primitive types). Shallow size of a set of objects represents the sum of shallow sizes of all objects in the set.

Retained size of an object is its shallow size plus the shallow sizes of the objects that are accessible, directly or indirectly, only from this object. In other words, the retained size represents the amount of memory that will be freed by the garbage collector when this object is collected.
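That definition can be made concrete with a toy sketch. Real profilers compute this efficiently using dominator trees; here we just brute-force it on a made-up object graph (all names invented): the retained set of x is whatever becomes unreachable from the GC roots once x is collected, and the retained size is the sum of those objects' shallow sizes.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class RetainedSetDemo {
    // A made-up object graph: who references whom, plus the GC roots.
    static final Map<String, List<String>> refs = new HashMap<>();
    static final Set<String> roots = new HashSet<>(Arrays.asList("root"));
    static {
        refs.put("root", Arrays.asList("a", "c"));
        refs.put("a", Arrays.asList("b"));
        refs.put("b", Arrays.asList("c"));
    }

    // Everything reachable from 'from', pretending 'excluded' was collected.
    static Set<String> reachable(Collection<String> from, String excluded) {
        Set<String> seen = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>(from);
        while (!stack.isEmpty()) {
            String n = stack.pop();
            if (n.equals(excluded) || !seen.add(n)) continue;
            stack.addAll(refs.getOrDefault(n, Collections.<String>emptyList()));
        }
        return seen;
    }

    // Retained set of x: objects reachable only through x, i.e. exactly
    // what the garbage collector would free if x were collected.
    static Set<String> retainedSet(String x) {
        Set<String> freed = reachable(roots, null);
        freed.removeAll(reachable(roots, x));
        return freed;
    }

    public static void main(String[] args) {
        // 'c' is also referenced directly from the root, so collecting 'a'
        // frees only a and b - that is a's retained set, not {a, b, c}.
        System.out.println(retainedSet("a"));
    }
}
```

The interesting case is exactly the one in the comment: an object shared via another path (c) does not count toward a's retained size, which is why retained size can differ so sharply from a naive "everything I can reach" estimate.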

The site gives a good pictorial representation that distinguishes the two. Now, YourKit is a commercial third-party product; for us mere mortals, JVisualVM will have to do. So I fired up jvisualvm to perform a heap dump. Note that memory-related operations using jvisualvm need to be done on the same host as the process and cannot be done remotely. Once the heap dump was done, take a deep breath, hit the "Find 20 biggest objects by retained size" button and leave for the day :)

If you are lucky, jvisualvm will have finished computing the retained sizes for the heap dump by the time you are back the next day. I was trying this out on a heap of 1.3 GB and it didn't complete even after consuming ~75 hours of CPU time. Similar complaints can be heard in the forums.

Next stop: YourKit Java Profiler 11.0.8. I got an evaluation licence valid for 15 days and proceeded to download the profiler tar (~68 MB) for Linux. I loaded the snapshot taken from jvisualvm into this and in a few seconds, the snapshot loading action was done. There was a question on CompressedOops in the 64-bit JVM which was best explained in detail here.

The default view itself shows the number of objects, the shallow size and the retained size upfront. It uses some heuristics to guess these sizes and you have the option of getting the exact values by hitting another button which refines them to the more accurate values. Interestingly, these differed only minutely.



Right-click on one of the culprits and you know all the retained instances. It offers a few other inspections as well, e.g. duplicate strings caused by careless use of new String(), duplicate arrays, sparse arrays, zero-length arrays and other oddities, each with a brief explanation.

Overall the responsiveness seemed much better than jvisualvm. If anyone has used jprofiler, you can share if such a thing exists there and how intuitive/responsive it is.

Now, over to MAT from Eclipse. This is a free tool which is purportedly as fast as YourKit. So I downloaded the Linux archive (~46 MB) and fired it up for the heap generated earlier.






Again, in a couple of minutes, MAT was able to create an overview which gave the biggest objects by retained sizes.





Tuesday, September 11, 2012

Maven release plugin on a multi-module project with different release cycles and hence versions

With Git migration headed our way, we were testing the readiness of our utility scripts for the new SCM. The maven-release-plugin advertises support for Git as an SCM, and we were hoping that it would be straightforward.

The project structure was like

  • parent


    • child1

    • child2

    • child3

We modified the pom file of parent as
    <scm>
        <connection>scm:git:ssh://gitserve/test/parent.git</connection>
        <developerConnection>scm:git:ssh://gitserve/test/parent.git</developerConnection>
        <url>http://gitweb/?p=test/parent.git;a=summary</url>
    </scm>

FTR, the connection tag is the read link and developerConnection is the read-write link. The split supports setups where reads and writes happen via different transports - say, HTTP for reads and SSH only for writes.

Also, maven-release-plugin v2.3.2 fixes some significant bugs, so make sure you use this version of the plugin. It is best set up via pluginManagement.
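Pinning the version via pluginManagement in the parent pom looks like this:

```xml
<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-release-plugin</artifactId>
        <version>2.3.2</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
```

All child modules then inherit this version without having to declare it themselves.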

We follow the standard pattern of moving changes from trunk -> branch -> tag in the SVN world. With the move to Git, the tag is a real tag: a genuinely read-only point in the history of the project, so there is no need for the commit hooks that were added to support the notion of tags in the SVN world. Other open-source projects like Spring and Sonatype have adopted a similar approach.

Now comes the interesting part: doing a minor release of one of the sub-modules, say child2. With the regular invocation of mvn release:branch, it complained of an invalid Git location:
ssh://gitserve/test/parent.git/child2

Clearly, this is not a valid Git location. I finally stumbled on an SO question which explains that this is actually some intelligence on the part of the release plugin when it encounters a multi-module project: it appends the module path to the parent's SCM URL. The workaround? Just redefine the scm tags in the individual sub-modules again, and voila - you are done!
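In our layout that means child2's pom repeats the parent's scm section verbatim (same hypothetical gitserve URLs as above), which stops the plugin from appending /child2:

```xml
<scm>
    <connection>scm:git:ssh://gitserve/test/parent.git</connection>
    <developerConnection>scm:git:ssh://gitserve/test/parent.git</developerConnection>
    <url>http://gitweb/?p=test/parent.git;a=summary</url>
</scm>
```

It feels redundant, but an explicit scm block in the child overrides the derived one, and the release plugin then uses it as-is.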

Yes, this was a long winding rant, but hopefully someone will benefit from it!

Here we go...

I had been thinking of collating all the random little techy stuff that we encounter day after day. It would serve two causes: it gives me a place to store all the useful links that I would otherwise forget in the days to come, and it might help others gain from problems already solved. A third advantage is that peers can suggest alternate or more elegant ways of doing things.