Wednesday 30 April 2008

New acronyms for the IT industry

I'm beginning to think that everyone who works in the software industry applies Object Oriented (OO) methodologies to everything. For example, in 2001/2002 we had Service Oriented Architecture (SOA), and since 2006 there has been talk of Web Oriented Architecture (WOA). When these acronyms first appear there is normally little substance to how technology supports the theory; Web 2.0 is a prime example, as it seems to be mentioned in connection with anything associated with the Web (IMHO).


I first saw WOA mentioned in 2006 in a ZDNet blog post titled "The SOA with reach: Web-Oriented Architecture". It was posted on the 1st of April, so I had to take a deep breath and make sure it wasn't a joke. My point is that WOA is just an extension of SOA, not something newly invented; in my opinion it is the natural progression, and it shows how the IT industry can adapt to business or user demands (or just prove that something new can be invented). Unlike technology-specific trends such as Java, Groovy and JRuby, which are created to improve the technology they replace or are based upon, SOA and WOA are labels for theories and practices. The benefit of these labels is that they provide a focus for identifying the key requirements that make up the theory, and let people in the industry categorise and know what is being discussed.


So why has it been so difficult to identify the requirements for labelling a Web 2.0 product, solution or component? (Wiki entries seem to debate what Web 2.0 is rather than provide a specific definition.) I believe it is because Web 2.0 is a social phenomenon, driven by business and user needs to have information in an easily accessible format and to be able to configure that data to meet their own needs. Enabling this requires rich user interfaces provided through Rich Internet Applications (RIA).



More acronyms evolved from SOA:

  • User Oriented Architecture (UOA) – I'm not sure this exists yet. If it did, it would align closely with Business Oriented Architecture. The basic idea is that users would have maximum flexibility to work with UI widgets and data objects to generate screens exactly how they want them. This could be viewed as an extension of Business Intelligence (BI).
  • Business Oriented Architecture (BOA) makes use of BPM and SOA to provide flexible and scalable systems that enable an organisation to adapt quickly to a changing marketplace. I believe the drivers here are senior executives and the IT department trying to meet business requirements.

Of the two, I believe User Oriented Architecture is the more powerful. It can provide great benefits by empowering end users to access data on demand. It could also cause a lot of damage to a business: users manipulating data incorrectly, intentionally or unintentionally, could result in incorrect decisions being made.

Is Web 2.0 just User Oriented Architecture, or will Web 3.0 provide this? I believe the answer is no. If User Oriented Architecture were ever to become an adopted term, I feel there would be a demand for highly configurable yet very simple user tools (note: I'm not talking about developer IDEs here). These tools would initially be aimed at power users but would eventually reach the mass market, and would not require in-depth coding or development experience. The first steps are mashups and Web 2.0 components and tools such as Yahoo Pipes.

In my opinion the Enterprise is lagging behind in adopting technologies for end users, and in some cases for good reasons. There needs to be some control over how data is accessed and manipulated; otherwise there is the risk of not knowing whether data is "a fact", or which data has evolved from a mashup of facts to deliver what people want to hear.

Anyway, back to where I started: Object Oriented Design and Programming are very powerful, but if you keep extending the original concept (object), things end up becoming overcomplicated and overused, diluting the original purpose. Do we need to refactor some of the terms used in the IT industry to make things simpler and more maintainable?


Saturday 26 April 2008

Where are all the applets?


A recent posting on Jim's blog, "Eat your own dog food", mentions www.upnext.com, a cool Applet that provides a 3D view of Manhattan (useful for SIFMA if you're going in June). I posted a comment on Jim's blog, but I also wanted to make my own posting.

I'm now wondering how many other Web 2.0 Applets there are out there - upnext is one I will try to mention in my talk at JavaOne 2008.

I have nothing against AJAX, Flash or Silverlight, but I do believe that Applets are being unfairly treated, and I'm surprised Sun doesn't have a library of Applet-based Web 2.0 sites or Applet-based products (maybe they do and I've been too lazy to find it).

So my pet project for the next few months will be to find more great Applet products....

Go see Altio at JavaOne 2008.

Tuesday 22 April 2008

AltioLive Google Social API Demo

In my February roundup I mentioned the Google Social Graph API, well the AltioLive development team decided to go ahead and put a demo together. It can be found at http://tinyurl.com/64dkc9. Jim has also made a mention about the app in his blog entry "Google Social API demo in AltioLive".

The only comment I have is that Blogger blogs seem to be difficult to analyse. IMHO this is strange because the Google Social API and Blogger are both Google products. If you enter http://thompson-web.blogspot.com/ not a lot happens, but if you enter http://www.heychinaski.com/blog/ it does just what's expected and finds a social network. I suppose it could be argued that I have no social life, which is why no network appears for me :-)

NOTE: The tinyurl mentioned above may no longer point to the demo in the future, as the demo may only be available through the Altio website.

Thursday 17 April 2008

Project estimation (duration, effort) and Project Failure

Several times recently I've been involved in discussions about project estimation, sometimes with project managers and other times in general conversation about project failure. Here is my opinion on why both duration and effort are important in estimation and why neither can be ignored. This is my personal opinion; every project and organisation differs and should be treated appropriately by using the correct project management and software design methods.

Background on Project Failure

The UK National Audit Office summarises the common causes of project failure as:

NAO/OGC Common causes of project failure

1. Lack of clear link between the project and the organisation's key strategic priorities, including agreed measures of success.

2. Lack of clear senior management and ministerial ownership and leadership.

3. Lack of effective engagement with stakeholders.

4. Lack of skills and proven approach to project management and risk management.

5. Lack of understanding of and contact with the supply industry at senior levels in the organisation.

6. Evaluation of proposals driven by initial price rather than long term value for money (especially securing delivery of business benefits).

7. Too little attention to breaking development and implementation into manageable steps.

8. Inadequate resources and skills to deliver the total delivery portfolio.

I define a failed project as one that goes over budget, over schedule or both, or that fails to deliver what the stakeholders actually expected. Most media attention focuses on costs and timescale, and let's face it, project failure is not isolated to IT projects: Wembley Stadium, the 2012 Olympic bid and the Millennium Dome were not IT projects. Until recently Heathrow Terminal 5 was hyped as the way to run projects (Agile); yes, it may have been on time and on budget, but in the end it failed to meet stakeholder expectations – the users of the terminal were far from happy and executives lost their jobs.

The importance of duration and effort in estimation

When I ask for estimates I always ask for two numbers, the duration and the effort – just so that it is clear to me how much the work is costing and how long it will take to deliver. When it's an external contractor I'm interested in the duration and the bottom-line cost, not the effort, so this note applies to internal projects.

Effort is the direct cost of running the project and I would expect a project manager to be able to break down the estimate into deliverables/artifacts/tasks/function points – for me they are all the same, a quantifiable item of work. The quantifiable item of work can be listed in a Scrum burn-down list or an item in a project plan, but the project plan and burn-down need to take into account duration, more on this later. This effort provides the basis for future estimation for performing the same or similar piece of work. Using a well managed timesheet system enables project managers to make better estimates for future projects based upon projects of similar size, complexity, and industry type (OK it's not quite that simple but I'm not writing a thesis here).

The effort estimate will be affected by a number of factors, e.g. sickness, holiday, training and going to meetings not related to the project – all the daily tasks that an employee will be expected to undertake. This is what makes up the duration estimate of the project; the bottom line is "how productive is a person each day in your organisation?". So if the effort to complete a project is 100 man days but an employee can only spend 80% of their time doing productive work, then the duration is 125 days (100 / 0.8) to deliver the project.
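That arithmetic is worth pinning down. A minimal sketch in Java (the class and method names are mine, for illustration only):

```java
// Convert an effort estimate into a duration estimate, given the
// percentage of each working day actually spent on project work.
class ProjectEstimator {
    static double durationDays(double effortDays, int productivityPercent) {
        if (productivityPercent <= 0 || productivityPercent > 100) {
            throw new IllegalArgumentException("productivity must be 1-100%");
        }
        return effortDays * 100.0 / productivityPercent;
    }
}
```

So 100 man days of effort at 80% productivity gives a duration of 125 days.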

Estimating both duration and effort means that a project can meet schedule and costs, but accurate estimation is only possible with historic data – which is why an accurate timesheet system is needed.

Most people in software hate completing timesheets. I know this because I did when I used to cut code – it's an unnecessary distraction and stops you getting on with doing fun things like designing and writing software.

If there is to be any professionalism in software engineering then developers, testers and the rest need to understand the importance of estimation. The problem is that every time a developer enters 8 hours of development time when they really worked 12, it sets false expectations for the project manager and stakeholders. The next time a project is estimated the project manager looks at the timesheets and thinks "if I pay for 2 hours of overtime I can get more from my team", and the team ends up working 14+ hour days. Nobody wants this, and I firmly believe in the Agile 8-hour day – even if I don't apply what I preach – but the responsibility lies with everyone on a project to ensure effective project estimation.

Applying the appropriate estimation

At Altio, PRINCE and Agile (Scrum) techniques are used to deliver projects. PRINCE provides the control and communication, while burn-down charts and daily meetings ensure that project duration and deliverables are constantly monitored.

Effort estimates are used to calculate cost, and this is where it is important that staff book their time accurately; otherwise a project can fail on cost because staff spent most of their time on work that was not project related (and I do draw the line at having a "Rest Room" or "Cigarette Break" task). For Altio projects we use several estimation techniques – the simplest being a spreadsheet that applies triangulation estimation using best, most likely and worst case scenarios.
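The spreadsheet itself isn't shown here, but the idea of a three-point ("triangulation") estimate can be sketched in Java. I'm assuming the simple triangular mean of the three scenarios; a common alternative, PERT, weights the most likely case more heavily:

```java
// Three-point estimation from best, most likely and worst case figures.
// The triangular mean weights each case equally; the PERT variant
// weights the most likely case by four.
class ThreePointEstimate {
    static double triangularMean(double best, double likely, double worst) {
        return (best + likely + worst) / 3.0;
    }

    static double pertMean(double best, double likely, double worst) {
        return (best + 4.0 * likely + worst) / 6.0;
    }
}
```

For example, best 4 days, most likely 5 and worst 9 gives a triangular mean of 6 days, while PERT gives 5.5 – the pessimistic tail pulls the equal-weight estimate up more.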

A project manager then takes the estimated effort and populates a project plan with tasks and adjusts staff availability to get a duration.

To monitor a project in progress, duration is the important measure, and burn-down charts with staff providing daily estimates of how long delivery will take are the key. This estimate by the team members is pure duration: if a person is only managing to work 2 hours a day and there are 10 hours of work left, then the duration is 5 days. It's down to the project manager to establish why the person is only doing 2 hours a day and to manage the risks that this dilution of work effort causes.
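The remaining-duration calculation above is simple enough to state as code (names are mine, for illustration):

```java
// Remaining duration for a burn-down chart: hours of work left divided
// by the hours per day a person actually spends on the project,
// rounded up to whole days.
class BurnDown {
    static int remainingDays(double hoursLeft, double hoursPerDay) {
        if (hoursPerDay <= 0) {
            throw new IllegalArgumentException("hoursPerDay must be positive");
        }
        return (int) Math.ceil(hoursLeft / hoursPerDay);
    }
}
```

10 hours left at 2 productive hours a day is 5 days; at 3 hours a day it rounds up to 4 days, since a part-day still consumes a calendar day.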

Constantly changing estimates

It is important to constantly review a project in progress and adjust estimates based upon knowledge from previous projects and deliveries. Using PRINCE gateways as the time to re-estimate is important, as that is when the details are provided to the stakeholders for them to make decisions.

Conclusion

There are lots of debates online about project estimation and ultimately every project will be different because of the people working on it, the technology being used and the expectations of the stakeholders.

Software projects are all about developing new and innovative systems – otherwise we would just buy the most appropriate product off the shelf. This means there is no blueprint for accurate software estimation: software engineers are not laying bricks to build a house, so there is no way to say how many bricks per hour a person can lay and apply that to all projects (the analogy being lines of code = bricks).

REFERENCES

Listed below are a number of useful links and documents that I use for reference.

  1. http://www.itprojectestimation.com/estrefs.htm
  2. The Holy Grail of project management success, http://www.bcs.org/server.php?show=ConWebDoc.8418 accessed March 2007
  3. Wembley Stadium Project Management, http://www.bcs.org/server.php?show=ConWebDoc.3587, accessed March 2007
  4. Olympic bid estimates, http://www.telegraph.co.uk/sport/main.jhtml?xml=/sport/2007/02/08/solond08.xml, accessed March 2007
  5. UK Government PostNote on NHS Project Failure, http://www.parliament.uk/documents/upload/POSTpn214.pdf, accessed March 2007
  6. UK Government Post Note on IT Project Failures, http://www.parliament.uk/post/pn200.pdf, accessed March 2007
  7. Project Failure down to lack of quality, http://www.bcs.org/server.php?show=ConWebDoc.9875 , accessed March 2007
  8. Steve McConnell, Rapid Development, Microsoft Press, 1996
  9. Martyn Ould, Managing Software Quality and Business Risk, Wiley, 1999
  10. Art, Science and Software Engineering, http://www.construx.com/Page.aspx?hid=1202 , Accessed October 2006
  11. Simple and sophisticated is the recipe for Marks' success, Project Manager Today, page 4, March 2007
  12. Ian Sommerville, Software Engineering 8th Edition, Addison Wesley, 2007
  13. Barbara C. McNurlin & Ralph H. Sprague, Information Systems Management in Practice 7th Edition, Pearson, 2004
  14. Overview of Prince 2, http://www.ogc.gov.uk/methods_prince_2.asp, Accessed November 2006
  15. The New Methodology, http://www.martinfowler.com/articles/newMethodology.html, Accessed October 2006
  16. The Register – IT Project Failure is Rampant http://www.theregister.co.uk/2002/11/26/it_project_failure_is_rampant/, accessed October 2006
  17. Computing Magazine – Buck Passing Route of Project Downtime http://www.computing.co.uk/itweek/news/2183855/buck-passing-root-downtime, accessed February 2007
  18. National Audit Office – Delivering Successful IT Projects http://www.nao.org.uk/publications/nao_reports/06-07/060733es.htm, accessed March 2007
  19. Successful IT: Modernising Government in Action, UK Cabinet Office, page 21
  20. Project success: the contribution of the project manager, Project Manager Today, page 10, March 2007
  21. Project success: success factors, Project Manager Today, page 14, February 2007
  22. Six Sigma Estimation http://software.isixsigma.com/library/content/c030514a.asp , accessed October 2006
  23. Analogy estimation http://www-128.ibm.com/developerworks/rational/library/4772.html, accessed January 2007
  24. Symons MKII Function Point Estimation http://www.measuresw.com/services/tools/fsm_mk2.html, accessed March 2007
  25. COCOMO estimation http://sunset.usc.edu/research/COCOMOII/ accessed January 2007


Wednesday 9 April 2008

Using complex web services in Altio

Since 2003 AltioLive has had the ability to work with SOAP web services, but we only have simple examples that use primitive variables. Recent professional services projects have required the use of Web Services with complex input and output data types, so I thought I would make some notes here before a more formal document is produced.

AltioLive was developed to make integration with SOA as simple as possible, but because of the broad set of functionality available in AltioLive it is not always obvious how to implement a specific solution. This note will highlight how to use AltioLive to work with SOAP messages and WSDLs.

What is a complex web service?

When I mention a complex web service I mean one that takes complex XML as input and returns complex XML. This is now predominant in the Enterprise, where .Net and JAX-WS object serialization make it easy to transform objects to and from XML, resulting in deeply nested XML hierarchies.

Listed below is one approach to using a complex web service in AltioLive. The default approach, by contrast, is to map controls to parameters in the service request.

  1. Generate a template of the XML structure for use by Altio screens. The template XML can be generated using Static Data, AltioDB, or a request to a template object.
  2. Map screen objects to the XML template.
  3. Submit the XML to a SOAP service request.
  4. Map the response data to the required location.

The Template

There are several options for providing the template of the complex XML. The general principle is that the application requires the XML structure that is to be passed to the Web Service, and the controls will be mapped to this XML structure.


Probably the simplest solution is to create an XML file containing the template and use this in AltioDB, or store it as a file on the server that can be fetched with an HTTP request. Then create an HTTP service request to retrieve the template XML.
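For illustration, a template for the report parameters used later in this post might look something like this (a sketch only – the element names echo the SOAP example further down, and the values are placeholders):

```xml
<!-- Hypothetical template XML served from AltioDB or a static file.
     Screen controls are mapped to these elements before the SOAP call. -->
<ReportParams>
  <Schedule>
    <Recurrence>
      <DailyRecurrence>true</DailyRecurrence>
      <WeeklyRecurrence>false</WeeklyRecurrence>
    </Recurrence>
  </Schedule>
  <Params>
    <ReportTitle>Monthly Inventory</ReportTitle>
    <ReportCode>INV-01</ReportCode>
  </Params>
</ReportParams>
```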


NOTE: Make sure the data keys are correct, as this is a common reason why data does not display correctly. This is especially important for the response data.



The SOAP request

Using the WSDL wizard is probably the simplest way of creating the required SOAP request:

  1. Open the AltioLive Application Manager.
  2. Select WSDL from the service request types on the menu.
  3. Enter the URL of the WSDL that declares the SOAP service you wish to use.
  4. Click "Retrieve Operations".
  5. Select the "Operation" you want to use.
  6. Click "Use Selected Operation".

These steps will create a new SOAP Service Request.

Because the service request will be using a complex XML structure to send and receive data, the "Parts" section of the Service Request will need to be modified. By default AltioLive will create a placeholder for the data that will be passed from the client to the request, as shown below:

<tt:create xmlns:tt='http://www.bbb.com/reports/ReportInstanceAdmin'>
  <tt:ParamData xmlns:tt='http://www.bbb.com/reports'>
    <tt:Schedule>
      <Recurrence>
        <DailyRecurrence>${client.DAILYRECURRENCE}</DailyRecurrence>
        <WeeklyRecurrence>${client.WEEKLYRECURRENCE}</WeeklyRecurrence>
        <YearlyRecurrence>${client.YEARLYRECURRENCE}</YearlyRecurrence>
        <MonthlyRecurrence>${client.MONTHLYRECURRENCE}</MonthlyRecurrence>
      </Recurrence>
    </tt:Schedule>
    <Params>
      <tt:InventoryParams>
        <ReportTitle>${client.REPORTTITLE}</ReportTitle>
        <TitleOpt>${client.TITLEOPT}</TitleOpt>
        <ReportCode>${client.REPORTCODE}</ReportCode>
        <Uid>${client.UID}</Uid>
      </tt:InventoryParams>
    </Params>
    <ReportInstance>${client.REPORTINSTANCE}</ReportInstance>
  </tt:ParamData>
</tt:create>

As the XML structure will be provided through a template the XML shown above can be replaced with:


<tt:create xmlns:tt='http://www.BBB.com/reports/ReportInstanceAdmin'>${client.PARAMS}:xml</tt:create>

The parameter reference ${client.PARAMS}:xml informs AltioLive to process the parameter as an encoded XML block; the XML will be decoded when the WebService operation is called.


Processing the response


By default Altio will generate the response template using ${response.result}. This works fine for simple WebServices, but as this scenario uses complex XML, the response body is the important part of the SOAP envelope, so the syntax to retrieve the message content needs to be ${response.body}; otherwise Altio will report that no XML was provided in the message.

To implement this change, edit the SOAP Service Request. The "Response" tab contains a field called "Literal XML string", which defines the structure of the XML that the results of the SOAP request should be placed into. For this example use the default value but replace ${response.result} with ${response.body}, so that the "Literal XML string" looks like the following:

<DATA><SOAP><execute>${response.body}</execute></SOAP></DATA>

Client Action



The final part of using a complex web service is to implement the client logic. The first step is to retrieve the template XML; this is done through a simple request to the Service Request that returns the template XML. The second step is to implement the logic to pass the data to the SOAP Service Request that will execute the SOAP operation. The syntax of the parameter to be passed to the SOAP Service Request is shown below:


PARAMS='eval(escape(xml-string(/ReportParams/*)))'

  • xml-string() – serializes the XML element into an XML string.
  • escape() – performs an HTML escape of the XML string to ensure correct transmission from the client to the server.
  • eval() – informs AltioLive that this is a function execution block and that the content needs processing.
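For readers outside AltioLive, the escape step is ordinary XML/HTML entity escaping. A minimal Java sketch of what it conceptually does (illustrative only – this is not AltioLive's implementation):

```java
// Minimal HTML/XML entity escaping, illustrating what an escape()
// step must do before XML is embedded in another document or request.
class XmlEscape {
    static String escape(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&apos;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```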

The "Parameter source" property of the "Server request" action block needs to be set to STRING; otherwise AltioLive will expect to find a control called PARAMS and use the content of that control for the XML structure.

Aside from the service request execution you will need to map controls to the XML.


Conclusion

Using a template for working with a complex XML structure is an alternative to using controls to pass the required data as individual fields. The benefit of using a template is that it allows the XML block to be manipulated by many screens and allows the XML to be built up from multiple sources if necessary.

The negative aspect of using template XML might be in migrating to a different technology; an example would be moving from a SOAP message to a REST format, where parameters would need to be passed rather than a complex XML structure. In my opinion, if the request is very complex then a SOAP message is probably far more effective and manageable than an equivalent REST message.

The AltioLive online help provides further details on using WSDLs and SOAP Service Requests, and if you are working in an SOA environment it is worth familiarising yourself with the different techniques for interacting with SOAP messages.

Monday 7 April 2008

Getting ready for JavaOne 2008




Just to back up my faith in RIA technology and Applets, I decided to put Altio forward for a talk at JavaOne 2008, and I guess people want to listen to me and Jim Crossley (Product Manager and Architect at Altio) spend 60 minutes talking about Applets.
Actually Jim is the real techie, so he will do most of the talking.

So if you happen to be in San Francisco 6-9 May 2008, come visit us and join the debate on what makes a good RIA, or more specifically a Rich Enterprise Application (REA).






Thursday 3 April 2008

nHibernate - Filter and Criteria for applying dynamic where clause

Over the last few days I've been getting my hands dirty trying to get dynamic filters working with nHibernate. As a future reference I've decided to post my notes.

The final solution makes use of Filters and the Spring Framework – IMHO two tools you cannot do without when working in Java or .Net.

Requirement
The system exposes complex WebServices which provide data-intensive processing based upon parameters supplied in complex XML. The .Net framework deals with serialization of the XML into objects; the requirement is to apply filter conditions only when a parameter is supplied in the XML. It is essential that maximum code reuse is maintained.

Options



  1. Write a data access object with lots of if statements. The code would need to generate a string that forms the where clause of the select statement.

  2. Use nHibernate ICriteria or IFilter objects to apply the conditions to the SQL.

Option 2 was the method of choice, with little thought given to option 1, as it was felt that option 2 would provide a more modular approach. Now to the point of this blog entry: there are subtle differences between ICriteria and IFilter, and I would recommend using IFilter, which I feel is a much more powerful mechanism for controlling the data returned in the object model.

ICriteria produced cartesian joins, so it did not truly reflect the object model. For example, if you have an Order object that can contain many OrderLines, you expect the ORM to produce one Order object that contains many OrderLines. Using Criteria in nHibernate produced many Orders (one per OrderLine), and each Order contained the correct number of OrderLines. The more Criteria applied to the query, the bigger the result set became. Also, objects returned using Criteria that contain Bags, Sets or Maps had the collection object populated without applying any filtering.

It was at this point that I decided to focus my attention on the Filter functionality provided by nHibernate and it worked perfectly.

IFilter requires a little more effort in terms of code and producing the hibernate mapping documents, but it is well worth it. The filter ensures that only the required objects are returned and that Bags, Maps and Sets are correctly populated with filtered objects. A filter is applied to the session, and the same filter definition can be used on many objects. Individual filters can be enabled or disabled as necessary.

Solution
The solution was to implement each filter condition in an object that determined whether the filter should be applied. The filter class retrieves the parameter value originally passed in the XML and applies the filter condition; nHibernate deals with the correct SQL syntax for the where clause. Each possible filter condition is put into its own class, and these are then chained together in an IList object which is iterated over by a control class. The configuration of the control class and filter classes is done in the Spring Framework so as to provide maximum flexibility.



The Spring Framework provides a flexible means to add or remove filters from a control class.

foreach (AbstractParamFilter paramFilter in commonFilterList)
{
    // Each filter decides for itself whether to enable its
    // NHibernate filter for the supplied parameters.
    paramFilter.ApplyFilter(accountingParams, session);
}

The filter classes implement either an interface or an abstract class, thus providing the polymorphism.

Each filter class is implemented to decide whether the filter should be enabled or disabled based upon the content of the object passed to it.
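The same pattern translates naturally to Java. A rough sketch of the idea (every class and method name here is invented for illustration, and the ORM session is reduced to a tiny interface so the shape of the pattern stands alone):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the chained-filter pattern: each filter inspects the request
// parameters and decides for itself whether to enable a named filter on
// the (abstracted) session.
interface FilterSession {
    void enableFilter(String name, String paramValue);
}

// Hypothetical parameter object deserialized from the request XML;
// a null field means the parameter was not supplied.
class ReportParams {
    String reportCode;
    String uid;
}

abstract class ParamFilter {
    // Enable the filter only when its parameter is actually supplied.
    abstract void applyFilter(ReportParams params, FilterSession session);
}

class ReportCodeFilter extends ParamFilter {
    void applyFilter(ReportParams params, FilterSession session) {
        if (params.reportCode != null) {
            session.enableFilter("reportCodeFilter", params.reportCode);
        }
    }
}

class UidFilter extends ParamFilter {
    void applyFilter(ReportParams params, FilterSession session) {
        if (params.uid != null) {
            session.enableFilter("uidFilter", params.uid);
        }
    }
}

// The control class iterates over the configured chain.
class FilterChain {
    private final List<ParamFilter> filters = new ArrayList<>();
    void add(ParamFilter f) { filters.add(f); }
    void apply(ReportParams params, FilterSession session) {
        for (ParamFilter f : filters) {
            f.applyFilter(params, session);
        }
    }
}
```

In the real system the chain would be assembled by the dependency-injection container rather than by hand, which is what makes adding or removing filter conditions a configuration change rather than a code change.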

The NHibernate configuration needs to have the filter applied to the properties or collection objects that require filtering.

Finally, a filter definition and the parameter for the filter need to be defined in the hibernate mapping files.
Chapter 14 of the NHibernate documentation provides the detail of what to do, though I personally do not believe the chapter describes the full benefits.
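As a rough guide, the mapping additions look something like the fragment below (the filter, class and column names are invented for illustration; check the NHibernate filter documentation for the exact syntax):

```xml
<!-- Hypothetical mapping fragment: a filter definition plus its use
     on a class and on a collection. Names are examples only. -->
<filter-def name="reportCodeFilter">
  <filter-param name="reportCode" type="String"/>
</filter-def>

<class name="Order" table="ORDERS">
  <bag name="OrderLines">
    <key column="ORDER_ID"/>
    <one-to-many class="OrderLine"/>
    <filter name="reportCodeFilter" condition="REPORT_CODE = :reportCode"/>
  </bag>
  <filter name="reportCodeFilter" condition="REPORT_CODE = :reportCode"/>
</class>
```

At runtime the filter is then switched on per session, e.g. session.EnableFilter("reportCodeFilter").SetParameter("reportCode", value), which is what the chained filter classes above end up calling.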