Mmiller2507 on December 6th, 2012

At a recent event I attended in London, where local methods in the United Kingdom (PRINCE2 and ITIL) were touted as “best practice”, I got to thinking once more about this popular buzz phrase.

In particular, I thought about how a common or established practice can be assured as being used as it is intended to be, and how excellence in developing and/or using it can be identified and assessed for reapplication elsewhere (at which point such a common or established practice can become what I would consider to be “best practice”).

Not all practitioners have the luxury or time to attend seminars or other events to compare and share their knowledge and experience, so my conclusions on at least two ways that practitioners on the ground can help draw this distinction between common/established practice and best practice are as follows:

  1. Overcome the copycats
  2. Manage application of practice according to the context

This is how I came to draw these conclusions.

1.  Overcome the copycats

This comes from firsthand experience of seeing how best practice has been defined and applied. The qualification of some practice as being “best” seems to me to have (naively) come down to little more than a lot of different people, or businesses, following a common practice.

That practice will have been documented (perhaps by an organisation that sets itself up as an industry authority, such as the APMG or OGC) and is then largely bought into by businesses copying it, on the assumption that this is what constitutes best practice. So it could be said that such practice is “best” by adoption rather than by merit – unconstitutionally so, if you like.

Indeed, it is perhaps not surprising that the Wikipedia definition of “best practice” even alludes to how the expression can simply refer to a description of “the process of developing and following a standard way of doing things that multiple organizations can use”. 

This definition of “best practice” is based on what is commonplace rather than what is necessarily best, and simply requires a practitioner to repeat a well-known or popular practice without understanding any value to be gained from the results. Ironically, I can still see some value in this copycat approach, as it can enable some consistency in analysing and comparing results across projects or services. But there is no guarantee that any analysis will be involved in applying the practice, that any assessment of value-add will be made, or even that the expected results will be generated. It will just produce a standard set of outputs, such as a document or a form populated from a template.

The focus therefore needs to be on best practice achieving business-oriented outcomes over common, standardised outputs that fit how a delivery process is expected to work.  

To counteract common/copycat practice being considered “best practice”, and to guarantee true value through relevant application of best practice, I see the role of a best practice expert as requiring integrity, discretion and excellence in judging whether, as much as how, a relevant instance of best practice can be applied in any given operating scenario.

Such instances can, and ought to, consider how other practices that are known to work in relevant industries can be included and successfully applied. This will then deliver results that better manage or improve the business’s operations in that industry.

This industry-balanced application of best practice should include the following types of outcomes:

  • mitigation of specific types of operational risk (i.e. a functional or process-oriented one, such as supply chain risk, and not just a project- or service-based one);
  • resolution of a business problem; 
  • creation/enablement of a business opportunity. 

Many best practice methods provide a template for documenting these things; however, application of the practice should be focused on providing these outcomes, and the analysis of them, and not just on populating a template to reflect scope by rote. Simply following the best practice delivery model will not achieve business-oriented results – nor will focusing on one aspect, such as how IT is involved. Actual analysis and answers are what is required.

2.  Manage application according to the context

My other concern with the application of best practice comes from what I have seen of the focus of some “best practice” methodologies. There is a tendency for some of them, such as PRINCE2 and, to some extent, ITIL, to be more heavily geared to producing outputs (such as a document or report) rather than business-oriented outcomes.

In particular, this is why variations of the Agile method (such as Scrum) have come to be favoured over PRINCE2 and Waterfall as project management and software delivery methodologies. I believe this is because Scrum’s focus is on creating working systems rather than (as is the case with PRINCE2 and Waterfall) producing volumes of documentation that may only be read once (if at all!) and that do not necessarily reflect what has actually been done and is now happening on the shop or warehouse floor, or in the systems related to those.

So some discretion (or perhaps integrity?) needs to be applied in confirming that relevant outputs fit the expectations of the business and the objectives it wants to achieve at large, as well as the context.

For me, thinking logically on’t, what would constitute “best practice” is a way of working that results in outcomes that have been proven and expected from previous application of that way of working. To become best, a practice must therefore have been tried and tested to work in a number of common scenarios. This, of course, needs to be in line not only with cost and timeframes but also with a rational, practical and pragmatic approach to assuring quality (rather than relying on screeds of documentation to demonstrate fulfilment of requirements).

Such delivery practices do not have to fit the industry standard to the letter so much as fit the context of the organisation’s operations, and that of the enterprise as well as the industry or industries it operates in. They must also consider the availability of resources to analyse and assure outcomes (rather than outputs) within delivery timeframes – and so effective resource and stakeholder engagement, communications and governance are key for ANY practice to be effective (and should be the second thing, after scope is defined, to lock down).

So Business-Aligned Best Practice is Key

I have recently seen developments in applying TOGAF, regarded as a “best practice” framework for enterprise, solutions and IT architecture, as a way to define the organisational, business and industry context for operations – and so better align IT with what needs to be done with applications/data and technology infrastructure, according to the information needs that an operating model requires to achieve its goals end-to-end.

This is done through an initial focus on identifying the information needed by key organisations at designated points in the operating lifecycle. The practice does not comment on how the processes for developing/engineering and managing the components need to work (which can be a mix of either ITIL and PRINCE2, or COBIT and PMI/PMP) – but it does provide the over-arching best practice to follow if a business needs a single, common view on how investment in new people, business processes or technology comes together to deliver outcomes according to its operational objectives.

At its highest level (Business Architecture), TOGAF enables all operations (not just IT ones) to be aligned with business vision and strategy. While it may not be positioned to challenge or comment on that vision or strategy (which I see as its greatest weakness, though doing so may still implicitly be the enterprise architecture consultant’s job), the practice at least ensures that any form of change to organisations and information stays aligned to business vision and strategy.

All this assumes that the business vision and strategy is consistently known and understood, of course – which is another story I may talk about in a later blog.

So a best practice for me is one that, like TOGAF, is applied in a way that aligns delivery with business vision and strategy as well as with specific objectives. Even if it does not necessarily help define or challenge the overall vision (the subject of a later blog), TOGAF touches enough on generic views of the organisation to ensure that the focus, high-level design and approach of the solution are the right fit.

Why PRINCE2 and ITIL fall short in Business Alignment

The structure of PRINCE2 is such that it often misses this alignment, through being too generalised as well as trying to create its own project management language. It is not oriented to how business organisations or industries operate, and it requires significant adaptation to be effective.

ITIL is better for off-the-shelf application of best practice, through incorporating understanding of IT operations and processes, but is aligned too late with plans for changes to the business (although this has improved through ITIL V3 including Service Design). 

In my view, both ITIL and PRINCE2 need TOGAF to align with the business (although I note that some Best Practice Consultants have yet to realise this).  All of these common practices need specific instances of successful application in order to be considered “best practice”. 

Application should be pragmatic and rational

As such, it comes down to what, or who, is best at getting a business to adopt and use any of these practices, and what constitutes excellence in them.

I do not believe that self-styled “Best Practice Consulting” businesses necessarily have the answer to that. Businesses should be open to any rational application based on an experienced and qualified understanding of how the business is currently operating, and an analysis of every option in line with what is expected as a whole.

The preferred option does not have to be a “best practice” option, just as long as it fits with what the business is seeking to achieve as a whole (i.e. aligned with architectural principles, if not design patterns) and fits with both objectives and requirements in the specific instance.

So the business needs an open-minded practitioner who can analyse operations and delivery processes practically and pragmatically. I have gathered together those who can do this into a discussion group on LinkedIn called The UK and European Open Practice Technology Network, where I invite discussions from business practitioners on how the “open practice” experts in the group would approach a challenge that does not fit a common or standard practice, or perhaps bring a project or service back into line with one (if that is what is seen as required).

P.S. There should still be a plan for how the business can expect to arrive at the target operating model, assuming that we are having to be pragmatic and accept an exception or two in the interim to our utopian, best-practice-based model for the business.

Feedback

I am interested to hear what other people think on’t, either by direct comment on this blog or through discussions in the various Ecademy, Facebook, LinkedIn, Twitter and Xing social media groups I belong to.

About Matt

Matt is an independent Business Service Integration Consultant, who combines a unique mix of experience as a Senior Business Analyst with experience as an Enterprise, Applications and IT Architect as well as Senior Project Manager.

He blogs on best practice in aligning IT with business processes and operating models, as well as on best sources of contract work for fellow contractors in the UK and European Open Practice Technology Network and IT Contractors UK, and the current state of the IT and Telecoms market in the UK and industry at large. 

He is also interested in finding the best ways to enjoy a good work/life balance and so blogs through social networks such as Arts Hub, London Charm and Trip Advisor.  In 2012 he collaborated in producing an independent film The Spirit of Portobello which won an award for Best Local London Film at The Portobello Film Festival.  In recent years he has had part of a book he is writing, The Road from Camelot to Canterbury, published on Arts Hub and is now looking to produce a short documentary film based on this in line with getting it fully published.


Administrator on May 3rd, 2012

The Latest
Recent intelligence received through the 7C Alliance network about the state of the recession and its impact on IT and Telecoms (ICT) contractors suggests there’s a recruitment freeze, growing unease with the costs and value of recruiters, and a shift among hiring businesses towards doing their own sourcing.

The 7C Alliance’s Response

In response to similar concerns raised at the start of the first recession in 2009/2010, the 7C Alliance realised the need to make its profile visible to help distinguish good from bad recruitment practices as well as find ways to help end-clients with selection processes and hiring practices. We chose to communicate our views and strategies on this through getting involved in the emergence and development of social media to gather relevant stakeholders together.

To date this social media strategy has involved the following:

    1. Establishment and Development of the Open Practice Technology Network (OPTN)
    The OPTN evolved out of the 7C Alliance’s discussion group on LinkedIn when that group was rebranded as the UK and European Open Practice Technology Network, in order to allow independent practitioners in IT, Telecoms and ICT to get together with practitioners at their prospective clients based in the UK and Europe. This is intended to be done openly, outside of any contractual obligation and independent of recruitment agencies, so that both sides can engage in analysing the market and recruitment practices, as well as openly and generally discussing needs and focus for recruiting IT, Telecoms or ICT practitioners at large. With over 700 members in this group, a website is now under development to further enhance this engagement – and the 7C Alliance is seeking business partners to help finance and market this initiative.
    2. Lead Practitioners acting as Content Managers
    Leading practitioners in the 7C Alliance have now also accepted requests to support the OPTN’s specialist practices. This is to be done through content they manage or moderate, as well as promote or otherwise support, using subgroups of the OPTN’s LinkedIn discussion group. An update of each content manager’s plans for their subgroup will be posted in the coming months.
    3. Social Media Managers
    Some members of the 7C Alliance’s management team have also accepted roles to support other IT and Telecom contractor groups we are affiliated with and who are operating online through LinkedIn, Ecademy, Facebook, Meetup.com, Viadeo, Xing and other places online where information about practices and the market – as well as, of course, sources of contract work – are shared.

In response to this latest intelligence about the impact of the double dip on IT contracting in the UK, we will use discussions initiated in the OPTN’s main group to share the intelligence openly, and then provide content in the CRABS and other relevant subgroups presenting our analysis of the current state of the market, our proposed approach to the perceived root causes of the recession, and how we intend to support the ecosystem of consumers and suppliers of independent technology practitioners’ services that are impacted by it.

New options to engage and participate in the 7C Alliance
To avoid the 7C Alliance being seen to operate purely for profit, it is now moving to a not-for-profit (NPO) model of operation based on a set annual donation. The aims are still the same:

    * source and develop a pool of top flight IT, Telecoms or ICT contractors, operating independently of each other yet sharing 7C Alliance’s services for marketing
    * assure through coaching (now optional) that members are operating responsibly and providing services as well as they can be provided – a member’s practice services and business can be independently reviewed to ensure they are aware of the current state of the market and the need for the services they can provide, have increased opportunity to be the best they can be (and so ideally highly capable), and are aware of what they need to do to comply with local rules for doing business independently; and
    * assist these responsible contractors with access to professional and lifestyle support that enables them to get the best from doing contract work to support their chosen lifestyle in line with what it takes to be the best that they can be

How the 7C Alliance now works
An annual donation buys membership of the NPO (and soon-to-be limited-by-guarantee organisation). This entitles the member not only to have a say in the interests and direction of the group but also to access all forums and sites online that are known, through consultation with the 7C Alliance network, to be relevant to them for knowledge on their practice(s) or for sourcing work.

A membership is tailored to the member’s technical capability through a brief analysis of what the member is seeking, via a consultation with one or more of the 7C Alliance’s business and/or practice coaches.

The 7C Alliance Coaching Network
More extensive coaching, and mentoring, is also available to help overcome any challenges with alignment of a member’s professional and personal goals with their technical capability. This can include identifying any opportunities for increasing a member’s scope and rates for work as well as reducing costs of operations in line with lifestyle sought or other goals for doing contract work.

All coaching has been designed based on discoveries from members being coached over 8 years using the “7 C’s for survival in IT contracting” operating framework defined by the 7C Alliance’s founder (Matt Miller). The 7C contractor operating framework is aimed at helping to increase the IT contractor’s opportunity to understand the market for their services, determine a fair return for their services and also define or refine the lifestyle they are seeking from it.

Eligibility
Membership of the 7C Alliance is currently only open to IT, Telecoms and ICT contractors, or professionals considering contract work in IT, Telecoms or ICT. Membership outside this will be considered if deemed to help support development of the IT, Telecoms or ICT profession as well as the lifestyle of the independent IT contractor.

Ways to Join the 7C Alliance
The initial full 7C review is no longer mandatory for induction into membership of the 7C Alliance. To sign up for a membership of the 7C Alliance, without full review of the 7 C’s, please donate using the following form:

This donation entitles the member to a 15-minute consultation by phone (or face-to-face, if based in London, UK) where the member’s needs can be discussed and relevant access or other support provided. A member of the 7C Alliance contact management team will email to organise an appointment.

To sign up for 7C Alliance membership with a capability review – including either an analysis of your ability to compete and gain access to the market relevant to your capability, or of your ability to manage the finances and risks involved with IT contracting in the country you choose to operate in – click on the following link:
7C Alliance Membership and 7C Strategy Review

Coaching is provided in line with the 7C Alliance’s investigation into the current market for the member’s capability, through sources it understands to be trusted. The aim is to assure the member that they have sufficient capability and experience to operate independently through a fiscally and financially compliant business, including understanding any risks of providing services to different types and scales of business that their accountant or regular financial services provider may not understand or consider. In some cases, an opportunity can be provided to gain referenceable experience that the member can use to justify being hired.


Mike Barwise on June 24th, 2011

Although I’m a security guy and this isn’t really a security issue, I found a <sarcasm>rather wonderful thing</sarcasm> today. Attempting to obtain the adopted texts of the new European e-commerce regulations, I went to the European Parliament web site and tried to download them. The first document arrived with the file name “getDoc.do”. So did the second, and the third. Not only could they have overwritten each other if I hadn’t spotted the conflict, but no program I possess would have been able to open even the survivor directly.

On examining the internals of the file (a legal act under DMCA as it was in aid of interoperability) I found it to be an MS Word 2003 document – the easiest thing in the world to open short of a cookie jar. But not under its file name as delivered. To make use of the file I had to recognise the need to change its file extension and also to know it should specifically be changed to .doc, something I’m perfectly capable of, as I am of examining the raw data in the file. But not many ordinary citizens could be expected to know what to do with a file like this, and isn’t the European Parliament there to serve everyone in the EU?

A similar thing occurred recently on a white paper archive site. There, one selects a white paper from a list, provides an email address, and the document is delivered as an email attachment with the title of the white paper in the subject line. But in every single email the attached file is named “white_paper.pdf” or “white_paper.doc”. So each one that’s opened has a 50 per cent chance of overwriting a predecessor, and none of them are recognisable by their file names for what they are. I thought that’s what file names were for, but maybe I’m wrong. I happen to know that white papers are submitted to this site under their own original unique file names. But for some reason I can’t fathom, the web designer has chosen to suppress all the names – not throw them away, obviously, otherwise they couldn’t be used in the email subject line. But to conceal them from the customer as far as possible. Maybe there’s some deep undisclosed philosophical basis for that decision, but it certainly makes the archive difficult to use.
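The fix is neither hard nor new: have the server tell the browser the document’s real name. Purely as an illustration – the framework, route and document store below are my assumptions, not anything these sites actually run – a Python/Flask handler could do it like this:

    from flask import Flask, Response

    app = Flask(__name__)

    # Hypothetical store; both sites clearly still hold the original names,
    # since they can show them in lists and email subject lines.
    DOCUMENTS = {"2011-83": ("ecommerce-regulation.doc", b"...file bytes...")}

    @app.route("/documents/<doc_id>")
    def download(doc_id):
        name, data = DOCUMENTS[doc_id]
        resp = Response(data, mimetype="application/msword")
        # Content-Disposition gives the browser a meaningful, unique file
        # name instead of "getDoc.do" or "white_paper.doc".
        resp.headers["Content-Disposition"] = 'attachment; filename="%s"' % name
        return resp

One standard HTTP header, and the downloaded files neither collide nor arrive unopenable.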

But why am I up in arms? Usability’s not my professional discipline. No, but both the above cases are clear demonstrations of the web designer’s utter disregard for the user of their product – and that attitude most certainly extends to security as well. My impression is that most web designers these days are so wrapped up in their own artistry and ingenuity that they’ve forgotten who pays them and what they’re paid for. The client should be king, but the client’s customers are definitely emperors.

So I say this now to all those web designers out there who feel they’re really talented. If a site you built gives your client’s customers problems, you got it wrong, however funky the special effects. Bells and whistles don’t make up for incompetence at the functional level. Or to put it bluntly, wake up, grow up. And if that doesn’t work, give up – and leave web design to those who can deliver products that work properly.

Mmiller2507 on May 16th, 2011

After seeing a posting by one of the bloggers in the Open Practice Technology Network LinkedIn group, I attended the seminars associated with the CRN Partner Connect showcase in Coventry last week.

It seemed more like the Partner Correct Conference, as it had all the large players in operating platforms and infrastructure – including Google, HP, IBM and Microsoft – having their say, but without Amazon, Apple, Oracle or anyone from the Open Source side of things. The latter seem to be in the “out group”, or their own world (a cloud?) away from this set of players – so where’s the IT conference or exhibition that includes everyone in the industry nowadays?

Nevertheless, the grouping at this event gave me a good insight into how most of “the old” and/or big proprietary players position their local resellers for life with new and big corporates. It’s the best event I’ve found yet for walking (or sitting) around and getting an overall view of what’s new infrastructure-wise, as well as for comparing and proselytising one’s position if one is a reseller in the marketplace for IT (and now digital telecoms).

The Scene From The Seminars

There was still an element of these big players being cats circling one another, from what I gleaned from the seminars with key reps (and quiet words with a few afterwards). Judge for yourself from this short summary of sound bites from the ones I attended:

  • Cloud computing is a business model, not an IT one, as nearly everyone espoused.
  • Google originates from the cloud – yet I did challenge its partners’ positioning on its service offerings, as I had found from independent consulting work earlier this year that these are not clear, and it is hard to know how to find and compare partners. Even Googling them does not help with knowing who’s best!
  • IBM sells business services now, not IT – however, it is looking beyond the cloud and sees Smart Cities as what’s on its horizon: living in a world where energy, telecoms and IT are sold and supported as one small box that sits in a corner somewhere in the house, with everything auto-responsive in turning power up and down, and maybe asking or telling you what to do in various operating scenarios. Somehow I kept thinking of the movie “Transformers” throughout the presentation by Simon Baker (though it’s “2001: A Space Odyssey” now, as I write this). Still, it was good not to have anything more said on the cloud by the time I attended that one!
  • Microsoft thinks it’s in the cloud – but all the other vendors think it has no idea, other than Hotmail, about what the cloud is. That said, buying Skype may have changed that…
  • the Chrome netbook starts up in six seconds and all its apps are resident in the cloud (sorry, web?). This advice came from the presentation by the President and CEO of CompTIA, who had brought over a dozen different tablet and mobile devices with him and is not linked in with anyone in saying this…

In summary, the future’s not bright, it’s cloudy. Indeed, so the Google Partner rep advised, only they talked about the cloud three years ago – and now everyone wants (to offer) one. Really it was only IBM that seemed to have anything different and farsighted to talk about compared with the others – but then they are about business services now, right, and not just infrastructure!

The Scene from the Sidelines

Besides the big players, the outsider in this in-group was an organisation called CompTIA, which positions itself as supporting no one vendor in particular but being there for the IT professional in general.

CompTIA is an international not-for-profit organisation that is a mix of an independent training company and a vendor-independent technical certification authority, which IT professionals can use for working with the principles and concepts of any infrastructure. It also offers networking events that bring together IT professionals who work in supporting IT infrastructure.

From my brief look into their offerings, CompTIA’s certifications do appear to be neutral as far as software is concerned – and cover things like project management, networking, etc. However, perhaps naively, since they are new to operating here in the UK and Europe, their certifications are geared more to working in an American corporate world, as they ignore best practice defined by organisations like the OGC, such as PRINCE2, ITIL, etc. When I attended their drinks later I got a chance to query this in talking directly with their President and CEO, Todd Thibodeaux. Todd advised that their certification aligns more with PMI. So why not just offer PMI certification, I thought? Oh, yeah, that’s because they are vendor-independent…

The thing is, I did happen to meet a fellow independent contractor there, from the IT backup world, who had used their services and sat through Todd’s presentation with me. Even Todd’s lead-in to the presentation clearly showed that the UK was new to him: he hadn’t brought an adapter for UK power sockets, and so faced over half of his devices running down until he could find one – as he promptly demonstrated by using this dilemma as an excuse to pull out each of his dozen devices and show us all how much of a gadget junkie he is.

The CompTIA courses are not a total unknown here though, as my insider told me that he had found CompTIA’s training and certifications to be very expensive for what you can leverage from them – which is rather strange for an organisation that touts itself as not-for-profit. So maybe a local reality check is in order for CompTIA before claiming to be open to one and all IT professionals?

Still, I appreciated being able to attend the CompTIA networking drinks at the end of the conference, and that their President and CEO said he would be happy to get someone to host an event for us, so we can attend and form our own opinions of them in supporting the IT professional. Maybe we in the OPTN can educate them at a networking event, as much as them us, about what UK and European IT professionals really need to support us in doing IT work here – as well as advise them on where to buy a power adapter for U.S. IT networking and sales people’s devices to work here in the UK and Europe…


Mmiller2507 on April 1st, 2011

Going from Physical to Virtual

One of the things that I sought to find out when I started using LinkedIn discussion groups and other social media back in 2008 was the value of virtualisation for a number of prospective contracts I was looking at. This was because there were a lot of views about how to do it, but not a lot on the cost model and on what returns it can provide to the business if a solution is designed and configured effectively to use it.

I found out that the primary value-add with virtualisation back then was the reduction in the number of physical servers that had to be purchased and managed – basically getting more from the same or less tin and string.

What was not necessarily noted and understood then, however, is that a lot more precision is required in how applications are configured to work with the partitioning of the one server into many virtual ones.

The challenge is that there are a lot of assumptions that need to be made about the allocation of CPU, RAM and other resources required for that one server, as well as how the services can be expanded if required.

The risks are:

  • Performance and/or storage management issues, if the architect has not factored a good enough margin for error into the estimates of the mix of technical resources required
  • Insufficient reaction time to respond operationally, if the environment manager has not deployed monitoring tools with intelligent alerts based on analysing thresholds in line with the architect’s estimates (a sketch of such a check follows below).
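The threshold-driven check behind that second point is simple in essence. The thresholds and metric names below are assumed purely for illustration, not taken from any particular monitoring product:

    # Thresholds would come from the architect's sizing estimates.
    THRESHOLDS = {"cpu_percent": 80, "ram_percent": 85, "disk_percent": 90}

    def check_vm(metrics):
        """Return an alert for each metric that breaches its threshold."""
        return [
            "ALERT: %s at %d%% (threshold %d%%)" % (name, value, THRESHOLDS[name])
            for name, value in metrics.items()
            if name in THRESHOLDS and value >= THRESHOLDS[name]
        ]

    # Example: CPU and disk are over their limits, RAM is fine.
    for alert in check_vm({"cpu_percent": 91, "ram_percent": 60, "disk_percent": 92}):
        print(alert)

The value is not in the check itself but in keeping the thresholds tied to the architect’s estimates, so alerts fire while there is still reaction time.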

There is also subtlety required in relationships, as the architect and the IT operations or environment manager have to have good communication and a good understanding of one another, as well as respect for what each other can and does do.

…And then there’s outsourcing it

Unfortunately this relationship has been complicated, since some businesses have decided that it is all too much to understand and manage themselves, and therefore best to assign ongoing responsibility for service levels to an outsourced service provider.

This has resulted in responsibility for the architect–environment manager relationship being passed on to an outsourced service delivery manager. This is someone who needs to understand the client architect’s work and estimates, and make sure these mesh with the concerns of an environment manager who is now providing service on behalf of the outsourced provider across many different clients, and is no longer dedicated to that one business.

The business that has outsourced the service therefore needs to consider not just whether the service provider has a proven capability to readily configure servers, and to advise on how to ramp them up according to levels of usage, but also how the provider is going to balance its concerns with those of the different customers sharing the same rackspace and access to services.

Basically, outsourcing often amounts to socialising and politicising computing – and so, to counter the risks of this, the client business needs to have:

  • different choices of service provider available
  • the flexibility and capability to switch in the event of consistently poor technical operations and customer service.

The Outsourcer’s Approach

To counter such perceptions, the savvy service provider will therefore often bring in a Technical Account Manager – someone who understands environment management and customer service – to talk with the client’s outsourced service delivery manager. These characters analyse the situation and ensure clarity on the understanding and facts, as well as assuring that the client’s needs are balanced with those of all the other businesses sharing the same resources in that shared virtual environment.

Often this relationship simply amounts to allowing sufficient security and data protection to prevail, as and when required given the shared nature of the virtualised beast. The issue is that transparency over operational statistics is rare, if not non-existent – so a business can by no means be guaranteed that it is getting a fair share of the physical server’s resources, due to commercial confidentiality and data protection laws.

So, who dares care and do something about the outsourcer?

The thing is, though, there is still an ongoing need in-house for configuration design and for estimating the load expected on the servers required, virtual or otherwise. So what typically happens is that the architect role – where I am now often positioned, having once been a senior project manager – becomes a hybrid one.

That is, the role covers not only doing or managing design but also defining and assuring service delivery – being the linchpin in delivering into the service and assuring that the service provider is providing what the customer expected. Often it requires managing software vendors and/or developers, as well as hosting service providers – the whole enchilada of the application and its infrastructure being integrated into the online enterprise.

To concentrate solely on the IT Architect’s role, however – where there is the luxury to do that – it is nowadays one corner of a three-way face-off with internal Technical Operations and Service Provider Account Management. It is one that I have noted recently some in-house architects and environment managers are not necessarily personally or socially equipped for.

The key to success in the Architect’s role is therefore not just working out how best to virtualise by converting business forecasts, or statistics on past use of the apps, into volumes of technical bits and bytes to be transacted and/or stored. The design must be based on what the service provider can do, as much as on determining that they can support the type and scale of the applications to be served.

So good skills in vendor selection, evaluation and management are key to the modern IT architect – they need to be service managers as much as designers. There is therefore a clear career path up to IT Director or CTO for those good at managing both aspects – or into the Enterprise Architecture teams that are now emerging (but that’s another blog waiting to be written).

Enter The Cloud

Cloud computing services have now come along to reduce the IT architect’s and the business’s concern over planning and accounting for online transaction volumes and storage, but they have still not reduced the complexity of configuration design. To understand both how and why that is, you have to know what came before the cloud and what came after.

Traditionally, the hosted service provider model required that the client’s technical architect design the infrastructure configuration to support the application (or applications) and estimate volumes for peak load. The business would then sign a contract for 12 months, or more, with a hosting service provider that offered the best deal in managing according to this.

With cloud computing services, the big advantage is that virtualised servers are architected according to how much is actually used, rather than estimated, based on performance and load testing.

The difference with cloud computing services is that the costing of service provision is based on sizing for normal operation; as load increases or decreases, the cloud computing service provider will add or remove VMs to handle it, and then charge for the extra as and when it is used. This can be manual or automatic.

This approach makes it far more cost effective, as you’re only paying for what you need, not a year’s worth of peak demand. It also means there’s less pressure on the architect to get those estimates right – though there is more emphasis on monitoring and accounting for the cost of services.
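A back-of-the-envelope comparison makes the point – every number below is assumed purely for illustration, not a real vendor price:

    HOURS_PER_YEAR = 24 * 365

    # Traditional hosting: contract sized for peak load, paid all year round.
    peak_servers = 10
    fixed_rate = 0.50                      # assumed price per server-hour
    fixed_cost = peak_servers * fixed_rate * HOURS_PER_YEAR

    # Cloud PAYG: pay for a small baseline, plus extra VMs only during peaks.
    baseline_servers = 3
    peak_hours = 500                       # assumed hours per year at peak load
    cloud_rate = 0.60                      # PAYG is often dearer per VM-hour
    cloud_cost = (baseline_servers * cloud_rate * HOURS_PER_YEAR
                  + (peak_servers - baseline_servers) * cloud_rate * peak_hours)

    print("Fixed hosting: %.0f per year" % fixed_cost)   # 43800
    print("Cloud PAYG:    %.0f per year" % cloud_cost)   # 17868

Even at a higher per-hour rate, paying for the peak only while it happens comes out well ahead in this (assumed) profile.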

The latter ought to be tied in closely with understanding the strategy of the business and plans for expansion, especially if the business operates purely online or is heavily reliant on its online channel.  

The Cloud Effect

So the primary benefits of cloud computing services are the pay-as-you-go (PAYG) model and the fact that they can be provided much like a utility supplies gas, water or electricity. Quite frankly, the CFO and COO should love it!

So a lot of the architect or senior project manager’s role now comes down to selecting the right service provider for the jobs (sic), as much as doing or quality-assuring high-level design – though the cloud ought to reduce the emphasis on managing operations and put the onus back on good cost-effective design.

So Who’s Who – and Best – Cloud-wise?

Who’s best to use cloud-wise is a subjective question. It comes down to knowing the nature of the application and the security it requires, the technology stack required to operate it and whether the service provider supports it, and often many other factors.

Amazon EC2 is the best-known solution; however, it is no more public and/or shared than any other public cloud provider. They all have limits.

The way to look at the cloud service providers is that they are a web service supermarket – they sell building blocks LEGO style in different sizes which you plug together to create solutions, as well as provide a number of geographic locations in which you can deploy them.

Wedding inhouse with the Cloud

Normal hosting providers are able to provide more customised offerings in terms of SAN etc. – but at a cost. Cloud is NOT a panacea – which is why a lot of companies get burned on it: they get carried away with it being cheap and fail to consider the important questions, like “does this environment suit my business in other respects apart from low cost?”

Amazon do now also have a sort of equivalent of a dedicated VLAN/SAN – they call it their “cluster compute” instance. You can plug these together and get full-bisection, dedicated 10Gb network bandwidth end to end between them. However, it’s only available in the US-East region right now.

You do not need to be wedded to the cloud, as there’s also room for hybrid designs – e.g. you can VPN your own back-end stuff right into the AWS cloud, making use of existing storage infrastructure, while the rest operates independently.

It is therefore possible to have a happy wedding between in-house and cloud-based services. It just depends on what is critical in the trade-off between security and data protection on one side and performance and storage on the other – and there may be commercial factors too, depending on the service provider’s operating policy on what data or information it is prepared to host.

Architects and The Cloud

As far as the technical/IT architect is concerned, use of the cloud takes pressure off precision in estimating transaction throughput and data storage volumes in order to keep operating costs down.

Instead, there is more of a need to know how applications work together – not just what and how an application will individually process and manage data, but what data it needs, where and when, and how it needs to share that with other applications internally and with the outside world. So architects need to be more business-aware, if not functionally aware.

The IT architect’s challenge is therefore now, more than ever, on enabling a consistent and effective enterprise end-to-end – and perhaps less focused on IT and more on fit to business strategy. In short, they need to become more of an Enterprise Architect – if not one already (but perhaps without realising it?).

Acknowledgements:  Jack Knight

I would like to thank Jack Knight for providing the environment and operations manager’s insight into this blog from experience we had working on a site where we were able to consider Amazon Cloud against other forms of service providers.  He has also heavily influenced what and how I should consider architecture in this brave new cloudy (and now foggy) world we live in. 

Unfortunately we were not able to use Amazon for commercial reasons; however, Jack’s in-depth knowledge and advice on the alternatives was integral to helping me advise on and justify alternative means to support the large volumes of online transactions and storage that the business expected for a comparison site. So, to get yourself environmentally and operationally sound, I would highly recommend you contact Jack. He is available through LinkedIn at http://www.linkedin.com/in/jackknight

About Matt

Matt is an independent Technology Practice Management Consultant, who combines a unique mix of experience as an Enterprise, Applications and IT Architect with work as a Senior Project Manager (as and when required).

He blogs on best practice in IT and best sources of contract work for fellow contractors in the 7C Alliance and the IT Job Board, as well as on the current state of the ICT market in the UK and industry at large. 

He is also interested in finding the best ways to enjoy a good work/life balance and so blogs through social networks such as Arts Hub, London Charm and Trip Advisor.  In recent years he has had part of a book he is writing, The Road from Camelot to Canterbury, published on Arts Hub and is looking to get this fully published as well as write more in his spare time in between contracts.


Mike Barwise on March 29th, 2011

A well-known joke from my school days was “Preserve wildlife – pickle a squirrel”. That’s what we call a “point solution” – a narrowly targeted fix for a tiny part of a much larger problem. Similar point solutions are “nature conservation areas” and the various specific countermeasures we’ve developed against the ways attackers have found to break into our online systems.

Out of the mass of fine detail we can extract three primary ways to compromise online systems that are currently widely used by external attackers. The first is “SQL injection” – used to tamper with the database behind your dynamic web site. An attacker crafts the content of his (they’re usually blokes, so no sexism intended) response to a field on one of your web forms – remembering that URL parameters count as form fields. He embeds SQL commands in the form field in such a way that if they are passed to the database engine unfiltered they get executed on the database. Whole databases, records or fields can be deleted or modified and private fields containing passwords or confidential information can be read.
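To make the mechanism concrete, here is a minimal Python sketch of the difference between splicing a form field into the SQL text and passing it as a parameter – the table and field names are invented for the example:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")

    def find_user_unsafe(email):
        # VULNERABLE: the form field is spliced straight into the SQL text,
        # so input like  x' OR '1'='1  is executed as part of the query.
        return conn.execute(
            "SELECT id, name FROM users WHERE email = '%s'" % email
        ).fetchall()

    def find_user_safe(email):
        # SAFE: a parameterised query passes the value separately from the
        # SQL, so the database never interprets it as commands.
        return conn.execute(
            "SELECT id, name FROM users WHERE email = ?", (email,)
        ).fetchall()

Every mainstream database interface has supported the second form for decades, which is what makes the first inexcusable.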

Bad, eh? But the second attack type ain’t much better. Cross-site scripting (XSS) allows an attacker to turn your web site into an infection point that will contaminate the computer of practically everyone who visits it with malicious code. This is the preferred way to enrol “zombies” into “botnets” – huge networks of remote controlled computers that are rented out by the criminal underworld to send spam and attack business and government systems for a fee. XSS is also done by crafting the content of a form field – this time so that JavaScript gets included in a database record that is passed back to users as part of a dynamically generated web page. When that page is loaded by the victim’s web browser the malicious JavaScript is executed.
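The corresponding defence is equally old: escape stored content on output so that any embedded script is rendered as inert text rather than executed. A minimal Python sketch, with the markup invented for the example:

    from html import escape

    def render_comment(stored_comment):
        # Escaping on output turns <script> into &lt;script&gt;, so any
        # injected JavaScript is displayed as text instead of being run.
        return "<div class='comment'>%s</div>" % escape(stored_comment)

    print(render_comment("<script>document.location='http://evil.example'</script>"))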

The third common attack uses a rather more insidious mechanism. A document in a well-recognised format (in recent times, particularly PDF, Flash, Windows icons and various graphics formats) is created and tampered with so that when it gets opened by the appropriate application on the victim’s computer, it crashes the application, allowing the computer to be totally compromised. This attack is quite often implemented via tempting (to some) document titles – “see this lady naked” &c hosted on dodgy web sites. In that case it’s a matter of “more fool you” and I have no sympathy with the victim. But the attack is also perpetrated via XSS – the tampered document being retrieved automatically by the injected JavaScript. In that case the malicious document needs no fancy title to entice, nor does it need any specific content other than the tampered features – it probably never gets seen by the victim at all. And in this automated guise it can potentially be successful against servers as well as clients, supposing for example that the server issues internal reports using HTML with scripting.

Apart from the “more fool you” variant, these attacks all depend on one thing – failure on the part of the site developer to properly validate user input from forms. I’ve written about this in detail recently elsewhere, so here I’ll restrict myself to saying it’s inexcusable – the most basic rule for newbie programmers is “never trust user input”. But even the pure “more fool you” depends entirely on some bug in the application that attempts to process the malicious file – and a bug is no more or less than a mistake by a programmer.

There’s a folk wisdom that software is more complex than anything else we do, so no developer can possibly be expected to produce fault-free code. But I take issue with this on two fronts. First, I’m not talking about “fault free” – I’m talking about not making glaring errors that have been well documented and regularly perpetrated for decades. Second, if an airliner has software on the flight deck (as they all do now) it logically follows that the airliner as a whole is more complex than the software it uses. But we don’t say “oh dear, the wings fell off in flight – but the system is so complex that nobody could be expected to get it entirely right first time”. And for good reason too – the financial implications of failure are huge and come directly home to roost. Regardless of the ultimate source of the problem, Rolls Royce have taken responsibility for the recent Airbus A380 engine failures, as did Toyota when the brake and accelerator problems emerged. So although we can never exclude the possibility of errors, they are rare in such branches of engineering because they’re based on established bodies of proven theory and practice, and are performed with vigilance and forethought.

In commercial software development the position is entirely different. Just for example, Microsoft release an average of half a dozen bug fixes every month – it’s an established ritual called “patch Tuesday”. And other vendors release “patches” – corrections for programmers’ errors – at least as regularly. The standard “shrink wrap” license absolves the vendor from all responsibility except refund of the purchase price within a very limited time window. Apart from that, you’re on your own. According to the T&Cs, the software doesn’t even have to perform as advertised. Indeed an expensive package I bought once because it supposedly included a specific rare feature didn’t even support that feature until the next upgrade. And service pack 1 (a roll-up of the bug fixes to date) for Windows 7, released in February 2011, caused some computers to freeze with the dreaded blue screen. A few people moaned on the web and a fix for the roll-up of fixes followed at leisure. But this didn’t result in the wholesale abandoning of Windows 7 – because there isn’t any real alternative.

This licensing regime is so well established that it has the force of law in most countries, despite the high probability that it would be overturned as an unfair contract in any other branch of commerce except possibly investment banking. And there you have it – the financial clout of the vendor wins the day, regardless of the best interests of the customer. Whether or not the software works well or is secure is thus largely an externality for the vendor. Provided too many packages don’t get returned for refund, the vendor’s interest pretty much evaporates once you’ve paid for your license. It’s clearly more cost-effective to issue bug-ridden code and then follow up with free fixes than it is to get the software right in the first place – otherwise it wouldn’t be the standard approach of the industry. Add to this the race to encourage churn – future revenues depend entirely on your already saturated market scrapping last years’ version for your new one – and it’s clear that globally very little attention is actually being focused on the robustness of the product or the exposure of the end user.

The nature of the IT industry is mimicry – I believe it was Larry Ellison who said “there’s only one industry more fashion-driven than ladies’ fashion, and that’s IT.” So the burgeoning population of bespoke web and mobile developers has followed suit – not only in the use of similar contractual exclusion of liability, but also in the early adoption of ever-more complex and internally obscure development systems and run-time environments. Not to be at the “bleeding edge” is to be out in the cold when it comes to your next move up the career ladder. So this sector is also subject to churn, which means that many development tools, libraries and run-time environments are widely deployed for clients long before their robustness has been adequately verified.

The general thrust of the advances in development systems and run-time environments is increased abstraction coupled with growing reliance on libraries of ever-more bloated and complex pre-defined methods or functions. This makes the development process quicker but has the side-effect of separating the developer ever further from the executable code. So even the inquisitive programmer who wants to see what’s going on under the hood finds it harder and harder to do so. And as markets are dominated by certain vendors, it also makes the way in which the features of the end-product software are implemented more and more standardised. This eventually results in an effective monoculture which is fragile against attack, both because any weaknesses get widely known about and because the target becomes large enough to be worth exploiting. And believe me, weaknesses abound. The shortest and cheapest path to the next iteration of the product is that of evolution and encapsulation. Thus bugs in the predecessor will often migrate undetected into the successor – in some cases persisting for many years.

Putting all this together, it seems we’re backing a loser. We surely are if we continue exclusively to produce point solutions for individual symptoms of this huge problem as they become manifest. We need a new approach to software design, development and testing – an approach that could more legitimately be described as engineering. We need a new echelon of developers who have been instructed in the first principles (they do exist for software engineering – we just usually ignore them), versed in the necessary level of attention to detail and equipped with the forethought to anticipate the complex mesh of possible interactions between online software and the real world. Above all, we need the development community to internalise their clients’ requirement for security even if it’s not explicitly voiced at contract time – to realise that in this ever more interconnected world, the global network of dependencies is only as strong as its weakest link. For the want of a nail…

©2011 Mike Barwise, Integrated InfoSec

cleslie4940 on March 29th, 2011

Too unruly for the Office of Tax Simplification to handle, IR35’s red blemishes are being covered over by a sticking plaster.

Following publication of the Office of Tax Simplification’s review of small business tax, the government has decided to retain IR35 as abolition would apparently put substantial tax revenues at risk. The government promises improvements in the way IR35 is administered.

These proposed improvements will apparently include guidance on the types of cases HMRC views as outside the scope of IR35. This will make an interesting read. In my opinion it’s what the law says that counts – and, with the greatest respect, not HMRC, where it often appears that no two status inspectors can agree on which factors to ignore. How many times have contractors had to overcome HMRC’s employment status perceptions and prove a negative when confronted by an HMRC IR35 enquiry?

IR35 is now 11 years old and such a thorny subject that even the OTS can’t decide what to do with it – more sticking plaster is surely not the remedy for spotty legislation.

How about HMRC gaining a basic understanding of work breakdown structures, and of concepts such as project management of labour and materials with Gantt terminal timelines, delivery of work to budget, and so forth? When exploring genuine arrangements between client and contractor, everyone would then gain a greater appreciation of contextual factors and apply the key status tests/principles of mutuality of obligation, control, personal service, financial risk, etc.

I would suggest that HMRC publishing the obvious guidance would be too simplistic and futile – a bit like HMRC status officers hearing “hourly rate” and assuming employment status without considering the genuine arrangements?

Chris Leslie
Director, Qubic Associates Ltd
(DD) 0191 493 4940 (e) cleslie@qubictax.com


Administrator on November 16th, 2010

Mike Barwise, a member of 7C Alliance’s Open Practice Technology Network and Group Manager of the Information Governance subgroup, is interested to hear from businesses who may have been seeking answers to questions like these:

  1. What’s the most effective way to control staff use of social media at work?
  2. How can we best manage corporate use of mobile devices?
  3. I’ve heard about the recent implementation of £500,000 fines for leaking personal data. How can our organisation most effectively cover the risk?
  4. What’s the best way to prevent corporate data loss?
  5. Our organisation has a huge number of security policies. How can we rationalise and reduce this volume to something more manageable?
  6. How can we be sure our security policies are working?
  7. What’s the best way to get the security message across to our staff?
  8. How can we put a value on our business information?
  9. How can our business keep up with constantly changing internet security threats?
  10. How can we ensure the quality of our information security risk decisions?

Please contact Mike for answers to these questions, and more about information security, by email to mbarwise@intinfosec.com or by phone to 0845 463 1624

A talk on information and IT security is also being planned through OPTN London for February 2011. This is where you can meet Mike and find out more about information security as well as meet other members of OPTN London in person. Please register your interest for this event now by email to register@7c-alliance.com or by phone to 7C Alliance Events team on 0844 844 2470.

To find details about other OPTN events in London, or to network online and in person with OPTN members in London, please join the OPTN London UK group on LinkedIn.

The Open Practice Technology Network is the first business network created by the 7C Alliance. It is for anyone generally interested or involved in managing information online for business purposes, as well as for those interested or involved in managing, supporting and using information and communications technology (ICT). The group is open to anyone who is not restrained from discussing ICT for trade reasons (i.e. it is for those able to discuss openly the effectiveness and efficiency of a particular vendor’s technology for doing business).


Administrator on November 13th, 2010

What a great night we had at the 7C Alliance’s launch of London Charm, its first social networking group, on Thursday 11/11 at The Tabernacle in prestigious Notting Hill, London!

Everyone seemed to LOVE the magic and charm of the world music performer we’d discovered over the course of 2008 to 2009 while getting out to explore live music around London. Milli Moonstone is that performer, and we hope to see more great gigs by her, as well as perhaps hear her and her songs on the radio (and we all agreed that songs from her album, “Lose Myself”, sound great on an iPod!).

Thursday night’s crowd also said they loved the idea behind the London Charm networking concept.

The idea of London Charm is to provide a place online to meet others in London with similar tastes in music, and in the specific bands and venues each person likes. From learning what each other likes online (marked with the Facebook “Like” feature at a minimum), members are then given opportunities to meet up and go out to see those great acts with others who like them, or simply to discover places around London to see great music performed live.

The group’s numbers have been boosted by 20 in only a few days since the intimate gig (which was limited to just 50 guests and had 20 existing members attend with a friend each).

The night was also a good test of whether we could apply the 7C Alliance’s structured networking concept to social networking events, as much as we have done to business ones (it was developed and trialled last year with the London IT Contractor Alliance Meetup group, which has since been aligned with the online group, the Open Practice Technology Network, to include anyone with an interest or involvement in technology).

The structured networking concept works by having the event coordinator prepare a summary profile of the guests attending, based on what they can glean from the guests’ profiles on the host networking tool being used (e.g. Facebook, LinkedIn, Ecademy).

This summary guest profile list is then printed and made available to everyone at the event, so that on arrival they can check it, see who they want to network with, and have a few details to hand as hooks for conversation starters with those people.

The idea is to beat having to go through the standard “What is your name, what do you do, what are your hobbies?” routine, while giving each networker some details to hand that are targeted towards the more specific focus and purpose of the networking event.

For instance, last Thursday’s event was not just about meeting and seeing Milli perform but also about people’s interests in live music around London – so the Guest Profile List showed people’s music interests as well as their other interests, where known or shown on Facebook or LinkedIn. People could also update their details on the night, and these are then kept in the networking group’s registration database for use at future events.
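For readers curious how such a list might be compiled, here is a minimal sketch in Python. It is purely illustrative: the Guest record, its fields and the format_profile_list function are hypothetical stand-ins for details that would, in practice, be gleaned by hand or exported from whichever networking tool the guests registered through.

from dataclasses import dataclass, field

@dataclass
class Guest:
    # Hypothetical record of what a coordinator gleans from a guest's profile
    name: str
    source: str  # networking tool the details came from
    music_interests: list[str] = field(default_factory=list)
    other_interests: list[str] = field(default_factory=list)

def format_profile_list(guests: list[Guest], event_focus: str) -> str:
    # Render a printable summary, ordered by name, with each guest's
    # interests shown as hooks for conversation starters.
    lines = [f"Guest Profile List - focus: {event_focus}", ""]
    for g in sorted(guests, key=lambda g: g.name):
        hooks = ", ".join(g.music_interests + g.other_interests) or "not known"
        lines.append(f"{g.name} (via {g.source}): {hooks}")
    return "\n".join(lines)

if __name__ == "__main__":
    guests = [
        Guest("Alex Example", "Facebook", ["world music"], ["photography"]),
        Guest("Sam Sample", "LinkedIn", ["live jazz"]),
    ]
    print(format_profile_list(guests, "live music around London"))

The point of the sketch is simply that the heavy lifting happens before the event: once the interests are captured in one place, producing the printed hand-out for each event’s particular focus is trivial.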

If you would like more details about the 7C Alliance’s structured networking, and assistance with building a business and social network, then email contact@7c-alliance.com.

Oh, and please feel free to join London Charm, the first social network that the 7C Alliance has created on Facebook, at http://www.facebook.com/home.php?#!/group.php?gid=148304341899, or otherwise the Open Practice Technology Network – the first online business network it has created, on LinkedIn, for independent technology professionals seeking work and the lifestyle they want to go with it.

Administrator on October 23rd, 2010

Back in 2007, following a social networking event organised by the 7C Alliance at The Prince Albert, a bar in London’s Notting Hill, a few of the attendees stayed on at the pub afterwards and met some people who wanted a good night out together, discovering the tucked-away and hard-to-find corners of London for good live music – but did not know where to start. From that chat, the idea for London Charm, an online networking group whose members help each other discover where to see good live original music and go out together to see it, was born.

Members of the 7C Alliance first set up a simple website (created by Steve Foster) and, later, a Facebook Group was created and built up with the help of friends from Patrick Whelan’s and several others’ networks. Matt Miller then tested promotion of the group through other social networks, such as Secret London (a Facebook Group with over 200,000 members), as well as people on MySpace and myvillage.com. He also spread the word by mouth at gigs at a few venues that people said they liked.

From doing all this, numbers have begun to build up, and so we are now looking for people interested in helping us pick out bands and venues to see, as well as getting people out to gigs together – including making a few gigs happen, like the one on Thursday 11/11 at The Tabernacle in Notting Hill.

The gigs we make happen will, for now, be at The Tabernacle in Notting Hill, a venue with a performance space that can hold 250 or more people and that has hosted exclusive gigs by the likes of Take That and Lily Allen.

We have also reserved the Conservatory at The Tabernacle on Thursday 11th November, so that before the performance people can come together, meet each other, and discover what each has to offer in bringing online business networks (like those typically managed or joined by members of the 7C Alliance) together with social networks such as London Charm, Secret London and MyVillage on Facebook, LinkedIn and MySpace.

Later on, we will celebrate these new beginnings with music from Milli Moonstone, one of the acts discovered by London Charm, playing music from her new album, “Lose Myself”, whose number “New Day” has lyrics fitting the occasion.

So we look forward to seeing you there at what we expect to be a unique night of both business and social networking, as well as good live music! Tickets are £15 a head, which covers the canapés and the performer, and can be bought online at http://www.wegottickets.com/event/97497

Please note that tickets are strictly limited to 50 – so please get in quick so you don’t miss out!

If you are based in London and want to join London Charm to meet others interested in live original music and to share your sense of style, then click on the following: London Charm social network.

New Beginnings Flyer
