Tuesday, September 27, 2011

SOAP Message validation in WCF

Well, after a bit more research it seems that Microsoft does support validating outgoing messages. It still doesn't resolve the issues in my post (http://kenneth.gotcheese.co.za/post/Microsoft-cannot-stop-being-a-rebel.aspx) but it does answer where Microsoft feels any validation should be done regarding SOAP. On one hand I agree with them, but on the other I disagree. They generate INotifyPropertyChanged support if you enable data binding, but they can't enforce the max length on a field? Strange, I know, but I suppose they have their reasons!

 

Anyways, you are able to validate the outgoing message against the declaring XSD. Again, this sounds fine but has a few issues, namely those around exposing the XSDs for public consumption. I suppose you could get around this via HTTP authentication, but that means you have to push authentication backwards and forwards. The other option is to distribute the XSDs with the client proxies. I see that the Service Reference created in Visual Studio 2010 does exactly this (after rewriting your XSDs for you).

 

To perform the validation you first need to create a client message inspector that implements the IClientMessageInspector interface.

 

Something like this:

public class MessageInspector : IClientMessageInspector


 



Once you have done that you need to implement the relevant methods (those that the interface declares).



 



The method we are going to focus on is the BeforeSendRequest method. In this method we will perform the validation using the XSD for the message. First, check that the Message object is not a fault; if it is, return at this point. Next you want to get the Body of the Envelope, which can be achieved by calling GetReaderAtBodyContents():



var bodyReader = message.GetReaderAtBodyContents();


 



Next you are going to need to get the XSD for your message. This might prove tricky if you do not have a naming convention that can be used to derive the name of the XSD. A way around this might be to load the locations of the XSDs into a dictionary (perhaps even load all the XSD schemas into that lookup so you can find them via targetNamespace), but that I will leave to your imagination.
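One way to picture that lookup: a table keyed on targetNamespace that resolves to a schema location. The namespaces and file paths below are invented for illustration, and the sketch is in JavaScript purely for brevity — the idea maps directly onto a C# Dictionary<string, string>.

```javascript
// Hypothetical schema registry: targetNamespace -> XSD location.
// Populate this once at startup (e.g. by scanning a schema folder).
var schemaRegistry = {
  "http://example.com/orders": "schemas/orders.xsd",
  "http://example.com/customers": "schemas/customers.xsd"
};

// Resolve the XSD for a message by the namespace of its body element;
// returns null when no schema is registered for that namespace.
function resolveSchema(targetNamespace) {
  return schemaRegistry[targetNamespace] || null;
}
```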



 



Once you have the body contents you open up the XSD file, load it into an XmlReader and read the XML document. If any errors occur, the callback method attached to the XmlReaderSettings.ValidationEventHandler event will be called.



 



First we configure the XmlReaderSettings:



var xsdPath = "pathtoyourfile.xsd";

using (var stream = File.OpenRead(xsdPath)) {

var schemaReader = XmlReader.Create(stream);

var readerSettings = new XmlReaderSettings
{
CloseInput = true,
Schemas = new XmlSchemaSet(),
ValidationFlags = XmlSchemaValidationFlags.None,
ValidationType = ValidationType.Schema
};


readerSettings.Schemas.Add("http://yournamespacehere.com", schemaReader);
readerSettings.Schemas.Compile();

//Attach to error event handler
readerSettings.ValidationEventHandler += new ValidationEventHandler(InspectionValidationHandler);
}


 



Then to validate the document you create an instance of a reader, attach the reader settings and read the document like so:



 



var wrappedReader = XmlReader.Create(bodyReader, readerSettings);

var startDepth = wrappedReader.Depth;

while (wrappedReader.Read())
{
if (wrappedReader.Depth == startDepth && wrappedReader.NodeType == XmlNodeType.EndElement)
{
break;
}
}


If there are any errors in the document they will be raised while reading and pushed to the callback handler.



 



For more information on solving this problem, check out the references I found while solving it:



 



Microsoft cannot stop being a rebel.

Well, they have done it once again. Microsoft has never been known to conform to what the world of software engineering classes as best practice. They are also known not to conform to widely published standards.

 

This is true of their Internet Explorer browser (although with the new versions they seem to be getting there) and other products. Yes, I know that is how they make money, but it is also the way they are losing a great deal of potential customers.

 

I was investigating the SvcUtil tool earlier and was asked to figure out how to enforce the declarations in the XSD limiting the length of string values. Now it would make sense for a client to support this right? Well not according to Microsoft. The XSD.exe utility also doesn’t support doing this.

 

The ONLY reason I can figure Microsoft didn’t do this is because they believe that the service should truncate and enforce the maximum lengths of the strings being supplied. While this might have you nodding your head going “ah yes, well then they have a valid point” my next question to you is, why is that valid?

 

Sure the contract needs to be enforced on the server side, that is a given. If it is not enforced on the server side it is not really a contract, is it? However, does this mean that we should allow huge sets of string data to be transmitted if only the first 250 characters are going to be consumed? I don't think that is viable, as you are polluting a call that was probably designed to be as efficient as possible. Still not convinced?

 

Well, let me throw it back to the consumer or client. If you are generating a client for a web service and you have not explicitly checked the XSD, will you be aware of the restrictions? You won't if you haven't checked it. The next thing is this: if the validation of those maximum lengths has fallen to the client to verify, how are you going to do that? How are you going to know that the string you are submitting must only be 200 characters long? The only way is to verify that the restriction exists inside the XSD and then implement some sort of check on the field, with a message letting you know if the max length is exceeded.
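To make that concrete, here is a rough JavaScript sketch of such a check. The field name and the 200-character limit are hypothetical; in practice you would have to dig them out of the XSD by hand, which is exactly the maintenance burden I am describing.

```javascript
// Hypothetical constraints, transcribed by hand from the service's XSD.
var fieldConstraints = {
  customerName: { maxLength: 200 }
};

// Returns an error message when the value breaks a known constraint,
// or null when the value is fine (or the field has no known constraint).
function validateField(name, value) {
  var constraint = fieldConstraints[name];
  if (constraint && value.length > constraint.maxLength) {
    return "'" + name + "' exceeds the maximum length of " +
           constraint.maxLength + " characters";
  }
  return null;
}
```

And of course, every time the contract changes, this table has to be found and updated — which is the whole problem.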

 

While this might seem like a viable solution, I am inclined to disagree. There is a significant amount of work attached to doing this, and if the contract should change for whatever reason, you will have to go back and find all the places you have implemented it.

 

I will be contacting Microsoft with regard to this to try to figure out their thinking behind it and whether there is a roadmap to fix it. Until then, if you have any ideas or suggestions, please let me know.

Sunday, September 25, 2011

Airsoft AK 47 MS

YAY! I finally got myself an airsoft rifle care of the folks at http://www.kreature.co.za. Full metal, so it weighs roughly 3 kg, which is really neat (I don't really enjoy the plastic ones that feel like I am carrying a water pistol). My initial impression is that it is pretty neat. It doesn't quite have the resonance or presence of the real one (but you can't go around discharging AK 47s randomly unless you are in some North African countries), and it seems to work well.

 

Let's have a look at a couple of pictures, shall we?

 

First up, the rifle with the two-point sling mounted and the stock extended:

IMG00081-20110925-1556

 

Pretty neat, huh? Next we take a look at it with the stock folded (still with the two-point sling):

IMG00083-20110925-1558

 

Right my next challenge was charging the battery. I found a really cool site that describes the formula for charging the battery. The formula goes like this:

 

charge time (hours) = battery capacity (the battery's mAh rating) ÷ charger output (mA, usually written on the charger) × 1.4 for NiCad batteries (1.5 for NiMH).

You can view the rest of the discussion here : http://answers.yahoo.com/question/index?qid=20091013093650AAcTEBp
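As a quick sanity check, the formula is a one-liner in code. The capacity and charger values below are made-up examples, not my battery's actual numbers:

```javascript
// Charge time in hours: capacity (mAh) / charger output (mA) * multiplier.
// Use 1.4 for NiCad batteries and 1.5 for NiMH.
function chargeTimeHours(capacityMah, chargerMa, multiplier) {
  return (capacityMah / chargerMa) * multiplier;
}

// e.g. a 1100 mAh NiMH pack on a 400 mA charger:
// (1100 / 400) * 1.5 = 4.125 hours
```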

 

So I worked mine out and it needed 4 hours according to the formula. So I charged it for four hours and went outside to have some fun. The battery died in 5 minutes.

 

Then I remembered that some batteries require a longer initial charge. So I gave the guys at Kreature a call and asked what the story was. It seems the Ni-MH type batteries require an initial charge of 8-12 hours! OK, well at least there is nothing wrong with my battery. So now it is plugged in again and this time I will leave it for 10 hours.

 

Anyways, still very excited to finally have it (been trying to get one for over a year now!) so waiting a few more hours ain’t going to kill me. Now it is time to start finding some games!

Thursday, September 22, 2011

JavaScript Hashmap and MVC 3

I was fiddling with an idea that allowed rows to be dynamically added to an HTML page and deleted off the page. This became a bit tricky because I couldn't identify the row I wanted to get rid of.

 

Eventually what I ended up doing was maintaining a list of the rows in a JavaScript object that functioned like a hash map, and instead of deleting one row at a time I would remove the entire list from the page and re-render it. The reason for this is that when submitting arrays to an MVC 3 controller based on a strongly typed model, you have to name the hidden input fields sequentially. Something like this:

 

<input type="hidden" id="EventList_0_SomeId" name="EventList[0].SomeId"  value="myid" />
<input type="hidden" id="EventList_0_Capacity" name="EventList[0].Capacity" value="25" />


 



As you probably gathered, the next one would increment the 0 in the id and the 0 in the name to 1, the next one to 2, and so on.



 



<input type="hidden" id="EventList_1_SomeId" name="EventList[1].SomeId"  value="myid" />
<input type="hidden" id="EventList_1_Capacity" name="EventList[1].Capacity" value="25" />

<input type="hidden" id="EventList_2_SomeId" name="EventList[2].SomeId" value="myid" />
<input type="hidden" id="EventList_2_Capacity" name="EventList[2].Capacity" value="25" />

<input type="hidden" id="EventList_3_SomeId" name="EventList[3].SomeId" value="myid" />
<input type="hidden" id="EventList_3_Capacity" name="EventList[3].Capacity" value="25" />


Just as a pointer, both the name and the id of the input have to be declared or the MVC 3 controller will not resolve the values. The above example is for a model that contains a list of objects with properties called SomeId and Capacity. If you do it the way I have illustrated above, it will resolve into a nice object representation in the controller that you can manipulate.
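The "remove everything and re-render" approach can be sketched as a helper that rebuilds the hidden inputs with fresh sequential indices. This follows the EventList example above, assuming each item has SomeId and Capacity properties:

```javascript
// Rebuild the hidden inputs for the whole list so the indices stay
// sequential, which is what the MVC 3 model binder requires.
function renderHiddenInputs(listName, items) {
  var html = "";
  for (var i = 0; i < items.length; i++) {
    html += '<input type="hidden" id="' + listName + '_' + i + '_SomeId"' +
            ' name="' + listName + '[' + i + '].SomeId"' +
            ' value="' + items[i].SomeId + '" />\n';
    html += '<input type="hidden" id="' + listName + '_' + i + '_Capacity"' +
            ' name="' + listName + '[' + i + '].Capacity"' +
            ' value="' + items[i].Capacity + '" />\n';
  }
  return html;
}
```

After a delete, call this with the surviving items and replace the container's contents — the indices are then guaranteed to be gap-free.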



 



The Hashmap declaration:



function Map()
{
// members
this.keyArray = new Array(); // Keys
this.valArray = new Array(); // Values

// methods
this.put = put;
this.get = get;
this.size = size;
this.clear = clear;
this.keySet = keySet;
this.valSet = valSet;
this.showMe = showMe; // returns a string with all keys and values in map.
this.findIt = findIt;
this.remove = remove;
}

function put( key, val )
{
var elementIndex = this.findIt( key );

if( elementIndex == (-1) )
{
this.keyArray.push( key );
this.valArray.push( val );
}
else
{
this.valArray[ elementIndex ] = val;
}
}

function get( key )
{
var result = null;
var elementIndex = this.findIt( key );

if( elementIndex != (-1) )
{
result = this.valArray[ elementIndex ];
}

return result;
}

function remove( key )
{
var elementIndex = this.findIt( key );

if( elementIndex != (-1) )
{
this.keyArray = this.keyArray.removeAt(elementIndex);
this.valArray = this.valArray.removeAt(elementIndex);
}
}

function size()
{
return (this.keyArray.length);
}

function clear()
{
// Pop until empty; a for loop that re-reads the shrinking length
// would only remove half the entries.
while( this.keyArray.length > 0 )
{
this.keyArray.pop(); this.valArray.pop();
}
}

function keySet()
{
return (this.keyArray);
}

function valSet()
{
return (this.valArray);
}

function showMe()
{
var result = "";

for( var i = 0; i < this.keyArray.length; i++ )
{
result += "Key: " + this.keyArray[ i ] + "\tValues: " + this.valArray[ i ] + "\n";
}
return result;
}

function findIt( key )
{
var result = (-1);

for( var i = 0; i < this.keyArray.length; i++ )
{
if( this.keyArray[ i ] == key )
{
result = i;
break;
}
}
return result;
}

function removeAt( index )
{
var part1 = this.slice( 0, index);
var part2 = this.slice( index+1 );

return( part1.concat( part2 ) );
}
Array.prototype.removeAt = removeAt;


 



The usage is just as simple. Include the JavaScript file and then:



var map = new Map();

map.put("key", value);
map.remove("key");

//etc


 



A really nice feature is that it does not duplicate keys but instead performs an "update" on the object at that key. And if you want to walk all the keys and values, you can do something like this:



 



for (var i = 0; i < map.keyArray.length; i++) {
var key = map.keyArray[i];
var value = map.valArray[i];
console.log(key, value);
}
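That update-on-duplicate behaviour is easy to verify. Here is a minimal, self-contained restatement of put/get (renamed MiniMap so it does not clash with the declaration above, and written prototype-style rather than with per-instance assignments — the behaviour is the same):

```javascript
// Compact restatement of the hash map's core: duplicate puts update
// the stored value instead of adding a second entry.
function MiniMap() {
  this.keyArray = [];
  this.valArray = [];
}
MiniMap.prototype.findIt = function (key) {
  return this.keyArray.indexOf(key);
};
MiniMap.prototype.put = function (key, val) {
  var i = this.findIt(key);
  if (i === -1) {
    this.keyArray.push(key);
    this.valArray.push(val);
  } else {
    this.valArray[i] = val; // key exists: update in place
  }
};
MiniMap.prototype.get = function (key) {
  var i = this.findIt(key);
  return i === -1 ? null : this.valArray[i];
};
MiniMap.prototype.size = function () {
  return this.keyArray.length;
};

var demo = new MiniMap();
demo.put("key", 1);
demo.put("key", 2); // same key: no duplicate entry
// demo.size() is 1 and demo.get("key") is 2
```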


 



 



I found the Hashmap declaration over here http://ping.fm/YAKIX



 



Some other interesting tidbits on the MVC embedded arrays, lists and editors:



Wednesday, September 21, 2011

Object Relation Mappers (ORM) vs Stored Procedures

Recently I was tasked with doing some investigation into the best route to take. Now, before you get all excited, I am not going to be posting performance comparisons or declaring an outright winner. What I am going to point out is how to make the decision based on other factors.

 

As I was looking for feedback on the respective technologies, it became very clear that this is a holy war that no one can win, due to our emotional attachment to our egos and having to be right, and the lack of really clear distinctions between the two.

 

First let's look at some basic best practices for writing maintainable software:

  1. Make code readable
  2. Use automated testing
  3. Use version control
  4. Ensure software is well designed
  5. Use less code
  6. Encapsulate
  7. DRY – do not repeat yourself
  8. Loose coupling
  9. Write unit tests

 

This is the essence of what I feel the articles in the reference section encapsulate. The primary reason for writing maintainable code (aside from having to maintain it) is to facilitate change. Businesses are becoming more dynamic and cannot afford to wait months or years for the implementation of a vision. First to market is more important than ever, with smaller businesses finding it easier to compete thanks to software and the internet.

 

Now the generally preferred structure of a software application is view layer, business logic layer and data layer. If designed properly one can very easily attach multiple views for different platforms to the solution without having to reengineer the business logic. The data stores can also be swapped out with relative ease or perhaps extended to include other data stores.

 

So what is a stored procedure? According to wikipedia: “A stored procedure is a subroutine available to applications accessing a relational database system. Stored procedures (sometimes called a proc, sproc, StoPro, StoredProc, or SP) are actually stored in the database data dictionary.”

Now, the benefits claimed for stored procedures have always been related to performance. It is a common belief that stored procedures run quicker than generated SQL. While this might be the case with an experienced writer, I have had the distinct displeasure of seeing it go horribly wrong as well. This does not mean that I have not seen it go wrong in code, but generally it is easier to fix the code than the stored procedure, thanks to the unit tests. When changing a stored procedure, inevitably you are going to have to change code. When changing the DB structure, you will have to change all the procedures that use that dataset and the code that maps to it.

 

Now let us get out of the emotional stuff and start comparing apples with apples.

 

If we have a look at the description above on how to write maintainable code, let's see how stored procedures match up.

 

  1. Make code readable – Well, no. It is a structured query language; while it sometimes looks like bad English, it can be difficult to read.
  2. Use automated testing – I haven't seen a way to automate the testing of stored procedures.
  3. Use version control – I have not seen a way to handle versioning of stored procedures with ease.
  4. Ensure software is well designed – Being procedural in nature, there is very little design that can happen.
  5. Use less code – There are certain things you can do in code that you can't do in SQL, so you might end up having to write far more SQL to facilitate them.
  6. Encapsulate – While some might argue that the procedure is encapsulated in the database, I would argue that the logic is not encapsulated where it should be.
  7. DRY – do not repeat yourself – With having to name tables and operations continually, there is a great deal of repetition happening.
  8. Loose coupling – Can stored procedures be interchanged between database vendors? Yes, if you haven't used vendor-specific functions. Unfortunately, they are also tightly coupled to the database.
  9. Write unit tests – I would if I could! I haven't seen this for stored procedures.

 

I am not going to run the code through the same assessment, as we all know that code supports all of the above. Right, let's get to the next point. While stored procedures might perform better, does the saving from the performance increase offset the additional maintenance cost attached to using stored procedures? The next question we need to ask is this: how safe is it to have business logic reside inside the database as opposed to the code base? What if you had specific rules for the same entities in a database? You would have to replicate the initial procedure and fine-tune it for each entity. Now, should the shared logic change, you have multiple places you need to go and change. Not good!

 

Let's look at it from the other side. Yes, generating SQL to query a database has a certain amount of overhead. That is the only concern people have. Let me say that again: the only con of using code over stored procedures is the performance aspect. So what do we do? Well, let's have a look at another definition: “'Premature optimization' is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing.”

Is this not what we are doing when we allow the decision to use stored procedures to affect our system designs? How about we try this from now on: let's write the application first, get it working properly (even if it is a single feature) and release it. Once we identify bottlenecks, we optimise them. This might very well include using stored procedures! Let's get out of the dark ages, folks. There is no right or wrong in this realm. There is only deliver on time or don't. Let's deliver on time. Perfection is generally a refining process anyway; expecting it on the initial iteration is absurd.

 

References:

Internet Explorer 8 and JQuery 1.6.x

Recently I launched a new site for a friend http://ping.fm/0u4cy. Everything was working really well till the site was opened in Internet Explorer 8.

 

So I set about trying to figure out what was going on. Every time a link was opened, the tab would crash and recover. My initial thoughts were that something was wrong with the JavaScript. So I started commenting out code to try to establish what was going on. Then I thought there might be something wrong with the CSS that was causing the tab to crash. I went around in circles for about an hour till I decided to scrap everything.

 

I commented out all the styles and scripts and the site stopped crashing the tab. Then I started adding back the references one by one till the browser crashed again. This happened as soon as I included the JQuery 1.6 min file. I couldn't figure out what to do till a ray of sunshine hit me and I thought about the JavaScript parsing engine in Internet Explorer 8. What if the parsing engine was failing on something and causing some sort of memory leak or overflow?

 

So I proceeded to download the uncompressed version of the JQuery library and added it. Holding my breath, I refreshed the page and clicked around a few times. The site was now working!

Monday, September 19, 2011

Facebook vs Google+

I logged on to www.facebook.com today and noticed something called “smart lists”. Upon closer inspection, this feature is a mechanism to group friends and view only their feeds. Nice, so now you can isolate the feeds you want to see, as opposed to having to sift through endless notifications from apps your friends are using that they need “an axe to chop down trees” or a neighbour “has found your long lost gold fish” or any other arbitrary rubbish that gets pushed to your news feed trying to get you to consume the application. So, yeah, a neat and original idea. Oh wait, it is not original! Doesn't Google+ Circles offer the same functionality? Well, I suppose it does. I mean, after all, if it looks like a circle and acts like a circle, it must be a circle (cymbal crash).

 

Upon seeing this I remembered that www.facebook.com was suing someone over a very similar infringement of their beloved news feed.

 

So after reading an interesting article about who is suing who in the mobile space, I thought I would see who www.facebook.com is suing (Google Search Results). I almost wet myself laughing when I viewed the results. So I thought, why not see who else is suing who? My next stop was who Google is suing (Results). The more I went on, the more I started realising that software not only supposedly makes business run better but is currently, single-handedly, funding law firms. With so much effort being pushed into suing people to get money they feel is theirs, no wonder there has been no significant breakthrough since World War 2.

 

Let me validate that statement. World War 2 saw the discovery and implementation of:

Jet aircraft

Fuel injected engines

Ballistic missiles

Nuclear Fission 

Assault Rifles

Radar

Sonar

Precursors to the computer

Devices used in household appliances

Multi track recording

Synthetic rubber

 

and the list goes on and on. So tell me, what have we discovered since World War 2? Aside from making computers smaller and more powerful? Aside from increasing the capacity of previous discoveries? What have we done in the 66 years since World War 2? Well, in my estimation, squat. Argue all you want, but provide me with proof. All we have done is create a society based on rampant consumerism; technological devices get upgraded and upgraded and upgraded, even though we are using less than 50% of the actual capacity of the machines.

 

Anyways, this isn't supposed to be a rant about society; it is just a pointer to how incredibly backwards we have everything. Perhaps I should do an article about creating opportunities for innovation in this space. Maybe I will if I get time. In the meantime, let's carry on suing everyone, because at the end of the day surely no one in the world's population of ~6,775,235,700 people could possibly have the same idea as me. I mean, I am just that special!

TOGAF Foundation day 1

What an interesting day. For a while now I have wanted to do some sort of certification in the enterprise architecture realm, mainly because I want to see if what I say all the time is actually the case, and having a certification proving you know what you are talking about never hurts!

 

Well, I was extremely pleased with day one of the two-day training presented by http://www.realirm.com. The supporting documents are clear, there are no gaps in the presenter's knowledge, and the environment is fun and interactive yet professional.

 

I was very pleased to find that my ideas are correct, but today also filled in a few gaps I have been struggling with. The thing that has become glaringly clear is that the role of enterprise architect is often misunderstood. While a technical background is a good idea, a great deal of the initial work is done outside the context of any specific technologies. This is the part I absolutely love! Being presented with a problem, or in TOGAF terms a “concern”, and then finding solutions to that concern. Problem solving is something I thoroughly enjoy, whether it be code-based or business-based.

 

Really looking forward to tomorrow, and once I have finished the foundational aspect I will most definitely be doing the next level.

 

For more info on TOGAF check out:

http://ping.fm/m2J2I

 

Other interesting links

http://www.zachman.com/

http://ping.fm/o1UBG

http://ping.fm/vw3wY

Thursday, September 15, 2011

Java Hibernate Setup

OK, here we go again. Now I am struggling to get Hibernate working with the persistence unit declaration.

 

The reason I am writing this is more as a pointer to myself should I ever have to do this again. Oh, check out my project on GitHub. It is an implementation of a repository pattern using Hibernate. It is extendable if you download the source and implement other providers. It is defined for standalone instances, not the full Java EE 5 stack, although I am pretty sure that with a bit of tweaking it can be used in that instance.

 

First I was getting the dreaded "javax.persistence.PersistenceException: No Persistence provider for EntityManager named". After a little testing I figured out that the properties file contained a persistence unit name wrapped in inverted commas where it should not have been wrapped:

 

datastore.database.persistanceunit = "PU1" -> wrong!
datastore.database.persistanceunit = PU1 -> resolved correctly.


 



OK, so yeah, I am rusty, but bear with me. After getting that right I started running into "Unable to build EntityManagerFactory". Drilling down a bit further, it came down to not having an initial context. So I went and manipulated the persistence.xml file to no avail. Then I started digging deeper and found a ClassNotFoundException (doh!). It seems I had forgotten to include the Postgres driver jar file. (This is one feature I really like in C#: if you reference an assembly that references another assembly, you get a warning if you haven't referenced the dependency. Although I can see how this falls through when using an XML configuration where there is no type checking happening; the driver is obviously being created using some sort of reflection.) Note to the Hibernate and JPA developers – please provide more verbose or smarter messages. Perhaps I just need to wake up!



 



Ok well, now the persistence.xml looks like this:



<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
<persistence-unit name="CommunityPlatformPU" transaction-type="RESOURCE_LOCAL">

<provider>org.hibernate.ejb.HibernatePersistence</provider>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
<property name="hibernate.connection.username" value="xxx"/>
<property name="hibernate.connection.driver_class" value="org.postgresql.Driver"/>
<property name="hibernate.connection.password" value="xxx"/>
<property name="hibernate.connection.url" value="jdbc:postgresql://localhost:5432/database"/>
<property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider"/>
<property name="hibernate.hbm2ddl.auto" value="update"/>
</properties>
</persistence-unit>
</persistence>



Right, a new exception to deal with. For primary keys I prefer using UUIDs or GUIDs, as they are always unique. Yes, I know: indexing issues, blah blah blah, speed-related issues, blah blah blah. I use them for a reason: when I transform the data into XML I want globally unique IDs so I can link via IDs. I usually got around this with the @PrePersist annotation (because the implementations only supported the integer values) but wanted to see if there had been any improvements since my last run-in with JPA. Turns out there have been.



 



This is the way you use UUIDs as primary keys:



@Id
@GeneratedValue(generator="system-uuid")
@GenericGenerator(name = "system-uuid", strategy = "uuid2")
@Type(type = "pg-uuid")
private UUID id;

public UUID getId() {
return id;
}

public void setId(UUID id) {
this.id = id;
}


 



Cool!  Next …



 



This little rig didn’t seem to like the jdbc3 drivers so switching to the jdbc4 drivers seemed to resolve that.



 



So that is that! Finally my test is passing and I am able to go to bed. Well, almost. Next it is time to configure the caching for the database and the connection pooling. It seems most of the libraries are included in the Hibernate distribution. So the final persistence.xml file looks like this:



 



<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence"


xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 


xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
<persistence-unit name="CommunityPlatformPU" transaction-type="RESOURCE_LOCAL">

<provider>org.hibernate.ejb.HibernatePersistence</provider>

<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
<property name="hibernate.connection.username" value="dev"/>
<property name="hibernate.connection.driver_class" value="org.postgresql.Driver"/>
<property name="hibernate.connection.password" value="dev"/>
<property name="hibernate.connection.url" value="jdbc:postgresql://localhost:5432/communityplatform"/>
<property name="hibernate.hbm2ddl.auto" value="update"/>

<property name="hibernate.cache.provider_class" value="org.hibernate.cache.EhCacheProvider" />
<property name="hibernate.cache.use_second_level_cache" value="true" />

<property name="c3p0.min_size" value="5" />
<property name="c3p0.max_size" value="20" />
<property name="c3p0.timeout" value="300" />
<property name="c3p0.max_statements" value="50" />
<property name="c3p0.idle_test_period" value="3000" />

<property name="current_session_context_class" value="thread" />
</properties>
</persistence-unit>
</persistence>


Green light on the tests, creating the database structure and persisting the information. Cool, now it is definitely time for bed; big day tomorrow, Skye turns 6.



 



References:



http://docs.jboss.org/hibernate/core/3.6/reference/en-US/html/mapping.html#d0e5294



http://docs.jboss.org/hibernate/core/3.3/reference/en/html/session-configuration.html#configuration-hibernatejdbc



http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html#performance-cache

Wednesday, September 14, 2011

Java resources (.properties)

OK, so I am making progress on a fiddle project that I am working on. I decided I was going to store the persistence unit name in a properties file to prevent embedding strings in the instantiation methods.

 

I sat and fought for some time trying to get the resources as a stream and came across some interesting links that explain how to do this, namely:

http://www.bartbusschots.ie/blog/?p=360

http://download.oracle.com/javase/6/docs/api/java/lang/ClassLoader.html

http://download.oracle.com/javase/6/docs/api/java/lang/ClassLoader.html#getSystemClassLoader%28%29

http://download.oracle.com/javase/6/docs/api/java/lang/ClassLoader.html#getResourceAsStream%28java.lang.String%29

 

After fiddling and fiddling and getting very frustrated with the NullPointerException that kept on happening I was just about to give up.

 

Then I realised something. Looking at all the examples, there was something I had added that I shouldn't have:

//Spot the ERROR!
Properties configFile = new Properties();
configFile.load(ClassLoader.getSystemResourceAsStream("/za/co/codeshark/application.properties"));


 



Don't feel bad if you don't see the problem (laugh at me if you do). So here is the problem: if you have a look at the string pointing to the resource, it has a leading "/". Yes, this makes the path unresolvable. So it should have looked like:



Properties configFile = new Properties();
configFile.load(ClassLoader.getSystemResourceAsStream("za/co/codeshark/application.properties"));


 



Notice that there is no leading “/”. Once I made this change everything started grooving and I was able to access my resource file. Once again kicking myself for not keeping these skills fresh. I find it weird, though, that with all the examples of how to do this, none point out anything about how the path is resolved. Perhaps I am just over tired, but I figured it might be good to make a note of this for 50 years from now!
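To pin down the slash behaviour, here is a small runnable sketch (the class name is mine; it uses java/lang/String.class as a resource that is on every JVM's class path, since .class resources stay visible even under the newer module system):

```java
import java.io.InputStream;

public class ResourcePaths {

    // true if the system class loader can resolve the resource name
    static boolean found(String name) {
        InputStream stream = ClassLoader.getSystemResourceAsStream(name);
        return stream != null;
    }

    public static void main(String[] args) {
        // ClassLoader resolves names relative to the classpath root, so a
        // leading "/" makes the lookup fail and return null.
        System.out.println(found("/java/lang/String.class")); // false
        System.out.println(found("java/lang/String.class"));  // true

        // Class.getResourceAsStream is different: there a leading "/" means
        // "from the classpath root", while no slash means "relative to this
        // class's own package".
        System.out.println(String.class.getResourceAsStream("/java/lang/String.class") != null); // true
    }
}
```

So the NullPointerException came from Properties.load() being handed the null stream that the failed lookup returned.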

The importance of rigid definitions, or why a verbose explanation is sometimes a good idea.

So I have been wiping the cobwebs from my Java skills and kicking myself for neglecting them. I suppose with work being focused on .NET development, two young children and a training schedule that leaves very little time for exploration on personal projects, it was bound to happen.

 

Anyway, things have changed now and I am able to squeeze in personal development time by sleeping less :D. Right, let's get to the point of this article. While designing an API in Java I noticed that I was finding it very difficult to package my classes the way I was doing it in .NET, so I started doing some digging.

 

My first thought was to have a look at the access modifiers available in both languages. Do a like for like comparison and see if there were any equivalents. So the C# language has the following access modifiers:

 

C#

  • Public: This is pretty much a free-for-all. The class can be accessed by everything inside the assembly and by anything referencing the assembly. This applies to types and type members.
  • Private: This makes members of the class accessible only to operations in the definition of the class. Kinda like private parts :O
  • Internal: This makes types or type members visible only within the same assembly. So even if a different assembly shares the namespace (for whatever reason) it will not be able to access the internal types or members of the referenced assembly.
  • Protected: This is a member access modifier that dictates that only types extending the declaring type can access the member. So a property, field or method that you want visible inside a type extending the declaring type, but not available to the rest of the assembly or publicly.

 

Right, let's move on, shall we?

 

Java

  • Public: Pretty much the same as C#. Free for all on everything declared.
  • Private: Again, pretty much the same as C# and the private parts.
  • No Access Modifier: This means that anything declared in the type or the type itself will only be visible in the package space it is declared in. Remember this! It is the topic of this post.
  • Protected: Available to types extending the declaring type.

 

Right, let's get to the point. Now that we have established each language's modifiers, have a look at this: http://www.javacamp.org/javavscsharp/internal.html

 

Looking at that, you will see that the C# access modifier “internal” is implied to be the equivalent of the Java default (no access modifier) declaration. Does the Java definition behave the same as the C# internal definition? Well, have a look at the definitions again:

  • C# Internal: Accessible to everything inside the assembly. This means namespaces moving up to the root namespace and down to the last namespace node.
  • Java No Modifier: Only available inside the package it is declared in.

 

Can you see it yet?

 

Let's have a look at a code sample real quick:

C# Code sample

//Assuming this is inside assembly my.cool.dll
namespace my.cool.project {
    internal class Cheese { }
}

namespace my.cool {
    using my.cool.project;
    public class StartTheCheese {
        Cheese cheese = new Cheese(); //valid: same assembly
    }
}

namespace my.cool.project.goes.on {
    public class DigestTheCheese {
        Cheese cheese = new Cheese(); //valid: same assembly
    }
}
//end assembly

//Assuming this is inside assembly my.ref.dll, which references my.cool.dll
namespace my.cool {
    using my.cool.project;
    public class DoWeHaveCheese {
        Cheese cheese = new Cheese(); //invalid: Cheese is internal to my.cool.dll
    }
}


Java Code Sample



package my.cool.project;

class CatchMe { // note that no access modifier is declared
    //body
}

// in a separate file:
package my.cool;

public class TheCheese {
    CatchMe catchMe = new CatchMe(); //fails to compile: CatchMe is not visible outside my.cool.project
}







You can see it now, right? The primary, intrinsic difference is that the C# internal modifier can span multiple namespaces in the same assembly. The Java declaration with no access modifier cannot be seen outside the package my.cool.project. This means that there is no equivalent of “internal” in Java. So here is the crux of the matter. When making comparisons, as in maths, we have to find the lowest common denominator before comparing or performing operations of logic in deciding the equivalents. Compare apples with apples to avoid confusion. Things we might take for granted will drive other people mad!
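For contrast with the failing sample above, the access compiles fine when both types share a package. A minimal sketch (names other than CatchMe are mine, and the default package stands in for my.cool.project so it fits in one file):

```java
// Both types compiled in the same package (default package here for
// brevity), so the package-private class is accessible.
class CatchMe { // no access modifier: package-private
    String greet() {
        return "caught";
    }
}

public class SamePackageDemo {
    public static String tryCatchMe() {
        // Valid: SamePackageDemo lives in the same package as CatchMe.
        return new CatchMe().greet();
    }

    public static void main(String[] args) {
        System.out.println(tryCatchMe()); // prints "caught"
    }
}
```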



 






Tuesday, September 13, 2011

Java porting and the Date string conspiracy

It has been a while since I have been able to write some Java code outside the context of Android. So I decided to take my C# NewsFeedParser (https://github.com/RabidDog/C--News-Feed-Parser) and port it to Java just as an exercise. I have just finished the RSS content parser, and in the process picked up a few issues with the C# version, so I will be cleaning that up soon.

 

Most of the concepts were the same but I must admit, I missed the internal keyword available in C# :). I still have to do a few tests to verify that I haven’t accidentally exposed anything in the library.

 

The one thing that was a bit upsetting is Java’s handling of date strings. Parsing a date string requires a format to be stipulated if you are using the framework parsing mechanism. Examples of parsing a date with a format can be found at http://techtracer.com/2007/03/28/convert-date-to-string-and-string-to-date-in-java/, http://javatechniques.com/blog/dateformat-and-simpledateformat-examples/ and many other places. While this works when you have control of the format, it can be quite tricky when you don’t.
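To make the “format must be stipulated” point concrete, here is a sketch along the lines of those articles (the class and method names are my own), using the RFC 822 date format that RSS pubDate elements carry:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class DateParsing {

    // RSS <pubDate> values use the RFC 822 date format; the pattern
    // must match the input exactly or parse() throws ParseException.
    public static Date parseRssDate(String value) throws ParseException {
        SimpleDateFormat format =
                new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss Z", Locale.US);
        return format.parse(value);
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(parseRssDate("Tue, 13 Sep 2011 10:15:30 +0200"));
    }
}
```

Hand parse() anything that deviates from the pattern and it throws ParseException, which is exactly the problem when you don't control the incoming format.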

 

A bit more searching led me to http://stackoverflow.com/questions/3389348/parse-any-date-in-java and then a little piece of gold. http://darthanthony.wordpress.com/2009/05/29/java-date-parsing-with-an-unknown-format/ pointed to a project called the POJava Project. The article also pointed out that there is a handy DateTime object that has the capacity to parse dates from most strings.

 

Usage is something like

import org.pojava.datetime.DateTime;

//rest of the class definition
Date date = DateTime.parse(myDateString).toDate();


 



So now you can parse many strings into date objects. Big thanks to the guys over at the POJava project. You can find them at http://www.pojava.org/.



 



Time to go clean up the C# project :)

Wednesday, September 07, 2011

Fluent Email and Git hub

After submitting the changes for Fluent Email to the initial developer, he suggested that I create a fork of the repo to contribute through, and I finally got round to doing it. So what I am going to do is share my experience here.

 

First thing you need to do is get Git installed on your machine. The installer can be found here: http://code.google.com/p/msysgit/downloads/list (I went for the full installer download). Then proceed to install the package and follow the instructions. Once you have completed that install you are going to need to set up your RSA keys to be able to connect to GitHub via the bash. You can add your keys at https://github.com/account/ssh.

 

Right, the next thing you might want to get is Tortoise Git. It simplifies the process of using Git quite significantly! Yes, I am going to learn the command line stuff ;) Tortoise Git can be found at http://code.google.com/p/tortoisegit/downloads/list

 

Right, now that we have everything set up, the next thing you need to do is fork the repo you want to work on. The instructions can be found here: http://help.github.com/fork-a-repo/. Once you have forked the repo you can clone it to a directory on your machine (much like SVN), then edit away, and when you are ready you can commit your changes and push them.

 

I updated Fluent Email to allow use of the template parsing outside the context of the Email class. I thought this would be handy for situations like the one I had recently, where I needed the parser but not the email. I would have created another project for the parser but figured that the credit is due to the initial developer, so I left it in there. Then I refactored the addressing mechanism to remove duplicate code. It now reuses a single mechanism to parse the email addresses and names. Anyway, if you are interested, check out https://github.com/RabidDog/FluentEmail

Razor Engine, more than just web pages

While working on a pet project I came across the need to populate a standard message with certain data values. Enter templating.

 

First thing anyone might do is add the templating mechanism to the message distribution stack. I take a slightly different stance on this. I believe the message distribution stack should have no knowledge of the templates being used. All it should effectively do is distribute the formatted message according to the specified communication channel.

 

Classes

 

This means that the piece of code creating the message would have to assign the formatted body to the message object. Something like this:

 

Message myMessage = new Message{
From = "from@domain.com",
To = "to@domain.com",
Subject = "My Subject",
Body = "This is where my super long message that needs to be formatted will go"
};

MessageGatewayFactory.CreateGatewayInstance(MessageType.Email).SendMessage(myMessage);


 



As you can see, the Body is looking a bit smelly. This could be rectified by using an external resource. Good idea! The problem is that we might (well, probably will) have to add dynamic data to the body.



 



So I started researching some templating solutions. There are some really heavyweight solutions out there. I didn’t want anything heavyweight though, and I wanted to use the Razor Engine. My travels led me to a project called Fluent Email. A write-up of the project can be found at http://lukencode.com/2011/04/30/fluent-email-now-supporting-razor-syntax-for-templates/



 



When running through the examples and having a look at the code I noticed that it did everything I needed it to do, but not in the fashion I wanted it done. Don’t get me wrong, this project has a great deal of potential and will prove very useful to many projects; it just wasn’t exactly what I was looking for. Digging a little deeper into the source I found the Email class, which contained a method to parse a Razor formatted string template and a method to read a Razor formatted file off the disk. BINGO!



 



So I extracted the two methods, changed them accordingly and lined them up with some best practices. This is what came out:



 



Template



 



and the code looks something like this (remembering our DRY principle ;) )



 



Our Interface definition:



public interface ITemplateParser {
    string ParseFromFile<T>(string fileName, T model);
    string ParseFromString<T>(string template, T model);
}


 



Parser factory:



public static class ParserFactory {
    public static ITemplateParser TemplateParser {
        get { return new RazorParserImpl(); }
    }
}


Parser implementation:



class RazorParserImpl : ITemplateParser {

    public RazorParserImpl() {
        InitializeRazorParser();
    }

    /// <summary>
    /// Parses from file.
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="fileName">Name of the file.</param>
    /// <param name="model">The model.</param>
    /// <returns></returns>
    public string ParseFromFile<T>(string fileName, T model) {
        var path = GetPath(fileName);

        using (var textReader = new StreamReader(path)) {
            var template = textReader.ReadToEnd();
            return ParseFromString(template, model);
        }
    }

    /// <summary>
    /// Parses from string.
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="template">The template.</param>
    /// <param name="model">The model.</param>
    /// <returns></returns>
    public string ParseFromString<T>(string template, T model) {
        var result = Razor.Parse(template, model);
        return result;
    }

    //Some weirdness pointed out by lukencode. Will be validating this further when I get a chance
    /// <summary>
    /// Initializes the razor parser.
    /// </summary>
    private static void InitializeRazorParser() {
        dynamic temp = new ExpandoObject();
        temp.PlaceHolder = String.Empty;
    }

    /// <summary>
    /// Gets the path.
    /// </summary>
    /// <param name="fileName">Name of the file.</param>
    /// <returns></returns>
    private static String GetPath(string fileName) {
        const string tilde = "~";
        string output;

        if (fileName.StartsWith(tilde)) {
            var baseDir = AppDomain.CurrentDomain.BaseDirectory;
            output = Path.GetFullPath(baseDir + fileName.Replace(tilde, String.Empty));
        } else {
            output = Path.GetFullPath(fileName);
        }

        return output;
    }
}




Usage



// Previous look-ups omitted for brevity; assume "response" already holds the profile look-up result
var body = ParserFactory.TemplateParser.ParseFromFile("~/Templates/MyTemplateFile.cshtml", response.Profile);

var sendResult = messageManager.SendMessage(new MessageRequest {
    Body = body,
    MessageType = MessageType.SayHello,
    Subject = "Just popped in to say hello",
    ToId = id
});


 



I decided to define the template parser as an interface to allow expansion to other parser and templating engines at a later stage. When this comes about, obviously I will have to change the factory method to return the correct implementation. Yes, it might be overkill for now, but loose coupling is something I try to do from the beginning.



 



I did submit these changes to the project, in case you were wondering. When testing it in the scenarios I needed it in, it worked really nicely. A really big thanks to the original author!



 



Hope you find it useful as well.



 









Monday, September 05, 2011

Seriously, Do not Repeat Yourself

A few projects I have been on and off over the years seem to suffer from the same problem: tight deadlines and the dreaded Ctrl+C, Ctrl+V. I honestly can’t figure out how this happens. Repeated logic is not consolidated into a single method that performs the required actions and returns the result.

 

Let me illustrate the problem very quickly

 

public Class1 CreateInstanceOfClass1(SuperInformation superInformation) {
    var myInstance = new Class1();

    myInstance.SuperInfo = superInformation;

    return myInstance;
}

public Class2 CreateInstanceOfClass2(SuperInformation superInformation) {
    var myInstance = new Class2();

    myInstance.SuperInfo = superInformation;

    return myInstance;
}


 



Right, this is a very simplified example, but let's examine it anyway. I am pretty certain we have all gathered that these methods create an instance of a class, assign a shared object to it and return it.



 



Do you notice a pattern here? Every time an instance of the object is created, the shared superInformation definition is assigned to the instance and returned. Can anyone see how we are repeating ourselves? How do you think we might resolve this? Well, my first thought would be to use a generic mechanism to create the instance and assign the shared object to it.



 



This might look something like



public T CreateClassInstance<T>(SuperInformation superInfo) {
    var output = (T)Activator.CreateInstance(typeof(T));

    output.GetType().GetProperty("SuperInfo").SetValue(output, superInfo, null);

    return output;
}


 



Which changes our code in the first example to



public Class1 CreateInstanceOfClass1(SuperInformation superInformation) {
    return CreateClassInstance<Class1>(superInformation);
}

public Class2 CreateInstanceOfClass2(SuperInformation superInformation) {
    return CreateClassInstance<Class2>(superInformation);
}


 



Well, that is one way of addressing the issue using generics in C#. A similar mechanism can be applied to if-else statements that follow the same logical flow inside different functions.
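For the Java-inclined, a similar consolidation can be sketched with reflection standing in for the C# generic helper (all the names here are mine, purely for illustration; a real API might prefer a constructor parameter over reflection):

```java
class SuperInformation { }

class Class1 { SuperInformation superInfo; }

class Class2 { SuperInformation superInfo; }

public class InstanceFactory {

    // One generic creation path instead of a copy per class: create the
    // instance reflectively, assign the shared object, hand it back.
    static <T> T createClassInstance(Class<T> type, SuperInformation info) {
        try {
            T output = type.getDeclaredConstructor().newInstance();
            type.getDeclaredField("superInfo").set(output, info);
            return output;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        SuperInformation shared = new SuperInformation();
        Class1 one = createClassInstance(Class1.class, shared);
        Class2 two = createClassInstance(Class2.class, shared);
        System.out.println(one.superInfo == two.superInfo); // true: both got the shared object
    }
}
```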



 



I guess the point of this article is this. If you copy and paste one piece of code you have replicated that code. If that code contains one bug, you have now created two bugs. If something intrinsic to the one changes, you have an additional place to go and change. Perhaps I am just too pedantic, but I am inclined to state that if you replicate one piece of code, you would be far better off wrapping it into a general method. This doesn’t mean trying to find all the places this might potentially happen. In my experience, code bases are organic (well, kinda). They grow, they change, they expand, they contract. When the expansion happens, expand with wisdom; when they contract, shrink with wisdom; when they change, change with wisdom. We can all identify a pattern in our code. If you identify one, fix it. I know the deadlines are tight, but taking a shortcut now may cost a substantial amount to rectify down the line when the 4th change set comes in. If you identify something that you can fix without adding risk to the project then do it. If it means an extra hour behind the machine, do it.



 



If you are working on a legacy project and are asked to add features, be smart. If you see that the previous code base was replicating code, don’t do the same thing! Identify the pattern and figure out how to avoid replicating the code in your feature set. Do not push these changes across the whole system unless you have done a risk analysis. Keep your area clean. Be proud of your work and craft it. Too often it is just a matter of writing as much code as possible in the most contrived fashion possible to prove how smart we are. I tell you something, you are going to look like an idiot when the next guy steps in and has to work with your code. Don’t be afraid to ask for help, and don’t be so arrogant as to not give help when asked. At the end of the day, the success of the project is not based on an individual's effort but on the combined effort of the team involved. Work together and deliver something you can all stand back and look at with admiration. If projects fail we need to take a hard look at ourselves and accept responsibility, no finger pointing.



 



Code hard, think hard and polish until it is done. And, by virtue of the fact that it continually changes, it never is done ;)

Friday, September 02, 2011

SABC TV Licenses

So I get this nasty letter the other day, demanding I pay my TV license. I then proceed to tell them I have. I am then requested to provide proof of payment. So I attach it to an email and distribute it to the SABC and their debt collection agency. This was three weeks ago. Today I get this:

 

Your message
   To: Justin Dorkin
   Subject: Outstanding TV license account - TV License No: *******
   Sent: Thursday, August 11, 2011 9:24:23 PM (UTC+02:00) Harare, Pretoria
was deleted without being read on Wednesday, August 31, 2011 9:50:29 AM (UTC+02:00) Harare, Pretoria.


 

I thought it was funny how it was “demanded” of me to pay and prove it, yet when I do comply it is not even considered.

 

Another update to the unread email saga:

 

Your message
   To: Gina Grond
   Subject: Outstanding TV license account - TV License No: ******

   Sent: Thursday, August 11, 2011 9:24:23 PM (UTC+02:00) Harare, Pretoria
was deleted without being read on Friday, September 02, 2011 8:30:51 AM (UTC+02:00) Harare, Pretoria.

 

Seems no one wants to read my emails :(