The Server-Side Pad

by Fabien Tiburce, Best practices and personal experiences with enterprise software

Fresh Thinking

with 3 comments

I really enjoyed attending a FreshBooks workshop today, held by Mike McDerment, FreshBooks’ co-founder and CEO.  FreshBooks is the leader in online billing, a rapidly growing category that FreshBooks essentially invented and is continually perfecting.  The workshop, Building a Web App Business, addressed the pre-launch phase any web startup faces and covered topics ranging from building and marketing to product management, metrics and financing.  I literally took ten pages of notes in four hours.  Mike is a wealth of information, candid with his answers and refreshingly transparent.  The session was truly inspiring and I will be drawing on today’s take-aways for some time.  In the meantime, I wanted to whet your appetite with a few tidbits: a number of quotes from Mike McDerment and the context in which they were given.  If you find them as thought-provoking as I did, you really owe it to yourself to get to know FreshBooks and perhaps attend a future workshop.

FreshBooks “We get paid to be an aspirin, a pain killer”.

Launch features “Build the least, you’re in a vacuum”. “You need to build the minimum number of features you need and engage people using it [the service]”.

Feedback “Make it easy for people to give it to you”.  “Make it easy for people to reach you, by telephone and email”. “You want to remove barriers”.

Ownership “I want people to act and behave as owners”. “Don’t stack chances against yourself by being a control freak or by putting up with one”.

Founding team “Trust, honesty, loyalty, openness”. “Passion is your fuel”. “If you call them at 3AM, are they going to answer the phone?”.

Lawyers “Lawyers get paid”. “Lawyers are risk reducers”. “If the dark clouds come, things break down gracefully”.

Office space “You do not need an office just because you have a business”. “Rent will increase your burn”.

Categories “Don’t underestimate the power of categories”. “What makes you unique?”.

Choosing a name “Easy to remember.  Easy to spell.  Describes the category.  Describes the benefit.  Describes the difference”. “A name is a vessel, you can build a lot of meaning into it”.

The story “Your story is going to influence your strategy more than you  know”.

The homepage “The home page must answer: what is it?  who is it for?  why it matters”.

Blogs “Outspend or out-teach”. “If you don’t have something interesting to write, don’t write”. “You want to build a network “. “Start sharing and teaching”.

PR “PR is a pretty hard thing to outsource”.

Support “Everyone [at FreshBooks] does support”. “When you are building a business, you are building a culture”. “Post your phone number”. “Use a forum, use Twitter, be everywhere”.

Usability “If you want to be humbled, watch somebody use your application.  You will be floored, you will be shamed”.

Surveys “If you want stats, use an online survey, if you want insights, call people”.

Decisions “You are editor/curator”. “You can’t please all people all the time”. “If you do support, you’ll know”. “Remove the pain, stay true to vision”.

Funding “You need to know your formula before investing capital at the top”. “When you know your users better than anyone.  When you have a formula for the $.  When you’ve got traction.  When you know your market size”. “Angels do 27 times more investing than VCs”. “Can you increase your share price by more than dilution?”.

Competition “I never think about competition”. “I don’t believe customer service is going to get out of style”. “If you only look at your competitors, you are only reacting, not leading”.

Startups “You do stuff in a way a big company cannot”.

Written by Compliant IA

June 16, 2009 at 9:32 pm

Relational Databases Under Fire

with 4 comments

There is a certain irony to this post.  It’s a bit like a car salesman trying to sell you a bicycle.  My career so far has largely revolved around relational databases.  That is slowly changing, however, as new storage mechanisms and models emerge and demonstrate they are better suited to certain requirements.  I discuss a number of them here.

1. Distributed file systems.  DFS, out of the box, scale well beyond the capabilities of relational databases.  Hadoop is an open-source distributed storage and computing framework; its file system, HDFS, was inspired by Google’s GFS (Google File System).  Hadoop also implements MapReduce, a distributed computing layer on top of the file system.
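The MapReduce model itself is simple enough to sketch in a few lines.  The Python sketch below mimics the map, shuffle and reduce phases of a distributed word count (the canonical MapReduce example); the function names and data are illustrative only, not Hadoop’s actual API, and everything runs in a single process rather than across nodes.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework would between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 3
```

The appeal of the model is that the map and reduce functions know nothing about distribution; the framework handles partitioning, failover and data locality.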

2. Enterprise search servers.  The biggest eye opener in recent years (which we implemented for a public library’s “social” catalogue) has to be Solr.  Solr is based on Lucene and also integrates with Hadoop.  Already in widespread use, this product is poised to gain further adoption as more organizations seek to expose their data (including social data) to the world through searches.  The speed and features of Solr alone sell search servers better than I ever could and quite simply leave relational databases in the dust.
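Part of what makes Solr so approachable is that search is exposed over plain HTTP: a query is just a URL.  The sketch below builds a typical select query with Python’s standard library; the host, port and collection name are hypothetical placeholders.

```python
from urllib.parse import urlencode

def solr_query_url(base, q, rows=10, fl="id,title,score"):
    """Build a Solr /select URL; wt=json asks for a JSON response."""
    params = urlencode({"q": q, "rows": rows, "fl": fl, "wt": "json"})
    return f"{base}/select?{params}"

# Hypothetical Solr instance holding a library catalogue.
url = solr_query_url("http://localhost:8983/solr/catalogue", "dickens")
print(url)
```

Any HTTP client, in any language, can issue the request and parse the JSON response, which is a large part of why search servers integrate so easily.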

3. RDF stores.  While relational databases are governed by an overarching schema and excel at one-to-many relationships, RDF stores are capable of storing disparate data and excel at many-to-many relationships.  Open source products include Jena and Sesame.  Unfortunately, at the present time, the performance of RDF stores falls well short of relational databases for one-to-many data (most typical in enterprise databases), making their widespread enterprise adoption a long shot.
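RDF’s data model is worth a concrete look: every fact is a subject-predicate-object triple, so many-to-many relationships fall out naturally and no overarching schema is required.  The toy triple store below uses plain Python tuples rather than a real store like Jena or Sesame, purely to illustrate the model; the identifiers are invented.

```python
# A tiny in-memory triple store: each fact is a (subject, predicate, object) tuple.
triples = {
    ("book:moby_dick", "dc:creator", "person:melville"),
    ("book:moby_dick", "dc:subject", "topic:whaling"),
    ("book:in_the_heart_of_the_sea", "dc:subject", "topic:whaling"),
    ("book:in_the_heart_of_the_sea", "dc:creator", "person:philbrick"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Many-to-many query: every book about whaling, regardless of schema.
whaling_books = {s for s, _, _ in match(p="dc:subject", o="topic:whaling")}
print(sorted(whaling_books))
```

A real RDF store adds URIs, inference and a query language (SPARQL) on top of exactly this pattern-matching idea.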

4. Web databases.  Consider Google’s recent (and very quiet) announcement of Fusion Tables.  While functionally and programmatically limited compared to other stores, the Google product focuses on rapid correlation and visualization of data.  A product to watch.

Seismic shift in data storage?  Not quite.  But an evolution is certainly under way.  Relational databases are in widespread use.  They are highly capable at storing data  and data relationships, scale reasonably well and are economical for the most part.  Relational databases are not going away.  But the once dominant technology is being challenged by other models that are more capable, more efficient and/or more economical at handling certain tasks.  By evaluating these technologies against your organization’s needs, you may find surprising answers and ROI.

Written by Compliant IA

June 12, 2009 at 9:04 am

Semantic Technologies will Rise from the Limitations of Relational Databases and the Help of Distributed File Systems

with 4 comments

As an architect of large enterprise systems, I look to the Semantic Web with envy and anticipation.   And yet, the more I look into the potential of semantic technologies, the more I realize semantics are victims of the success of the very technologies they are trying to replace.  The semantic web is a network of global relations.  Semantic content is not bound by a single database schema; it represents globally linked data.  However, as an expert in database modelling and database-backed systems, I am forced to concede that, for the purposes of each enterprise, a relational database governed by rules (a schema) mostly internal to the organization and serving a certain functional purpose is often all that’s needed.  Semantics are, to a large extent, a solution in need of a problem.

And yet I am a strong believer in a semantic future, though not for reasons pertaining to semantics per se.   While actual numbers vary by database vendor, installation and infrastructure, relational databases are inherently limited in how much data they can store, query and aggregate efficiently.  Millions yes, billions no.  The world’s largest web properties don’t use relational databases for primary storage, they use distributed file systems.  Inspired by Google’s famous GFS (Google File System) and MapReduce papers, Hadoop is a free, open-source distributed file system and computing framework.  It currently supports 2,000 nodes (servers) and, coupled with MapReduce, allows complete abstraction of hardware across a large array of servers, assured failover and distributed computing.  While 2,000 servers seems like a lot, even for a large enterprise, I am amazed how many enterprise clients and partners are dealing with ever increasing datasets that challenge what relational databases were designed for.

Why does this matter?  When dealing with millions of files and billions of “facts” on a distributed file system, semantic technologies start making a lot of sense.  In fact, dealing with universally marked loose content is precisely what semantic technologies were engineered to address.  And so I am hopeful.  Not that semantic technologies will prevail because of some inherent advantage, but because the future points to gigantic datasets of disparate origins, ill suited conceptually and technically to be handled by relational databases.  It’s not that semantic technologies are better, it’s that they are better suited for the times ahead.

Written by Compliant IA

June 3, 2009 at 10:04 pm

Fee-Based APIs Are Coming (It’s a Good Thing!)

leave a comment »

While Google has captured an overwhelming share of the search market by combining relevance with simplicity and speed, capitalizing on Google’s data to build business applications hasn’t been easy.  To this day, while you can buy a license for Google apps, maps and other offerings, the terms of use of the core search engine remain restrictive for B2B use.  In no uncertain terms, the terms of use state “The implementation of the Service on your Property must be generally accessible to users without charge and must not require a fee-based subscription or other fee-based restricted access”.    While this doesn’t rule out commercial ventures per se, it does rule out fee-based systems.  Ad-based systems are inappropriate for most B2B applications delivering the type of value-adding service that a corporate client typically expects to pay for, without ads and other distractions.

Why would a login-protected SaaS business application want to search Google?  The web is the largest collection of human knowledge ever assembled.  It’s also slowly being re-engineered semantically as a giant global database.  Thus opportunities abound for businesses to systematically mine the web and provide value-adding services on top of web-sourced data.  So why isn’t Google opening up its API to B2B use?  Google may be a search engine by function, but it’s an advertising company by revenue.  Google doesn’t make money crawling the web; its revenue is primarily generated by Sponsored Links.  Since ads don’t mesh well with API-sourced data (typically returned in a non-human-readable format such as XML or JSON), Google doesn’t have much to gain by giving it away.
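For illustration, this is what consuming such an API typically looks like on the B2B side: machine-readable JSON with no room for ads, parsed and repackaged into a value-adding service.  The response body below is invented for the example, not an actual Google or Yahoo payload.

```python
import json

# A hypothetical search API response, as a fee-paying B2B client might receive it.
raw = """{
  "query": "online invoicing",
  "results": [
    {"url": "http://example.com/a", "title": "Invoicing 101", "rank": 1},
    {"url": "http://example.com/b", "title": "Billing tools", "rank": 2}
  ]
}"""

response = json.loads(raw)
# Extract the result URLs in rank order, ready for downstream processing.
urls = [r["url"] for r in sorted(response["results"], key=lambda r: r["rank"])]
print(urls)
```

Nothing in that payload is human-facing, which is exactly why an ad-supported business model struggles to monetize it and a fee-based one does not.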

This post would end on a rather pessimistic note if it weren’t for the wonders of competition.   Being a distant second in the search market, and no longer the centre of attention, Yahoo has been quietly but relentlessly pushing the envelope lately.  They supported microformats long before Google did.  They also announced fee-based use of their BOSS Search API starting this year.  This is great news for two reasons.  Firstly, the fee eliminates the restriction to ad-based systems.  Secondly, the fee comes with assurances: response time guarantees, continued investment and support, as well as no usage limits.

Search engines and semantics are increasingly the “glue” of the Internet, a global repository of information which is starting to look more and more like a database (albeit one with no overarching schema).  Fee-based APIs enable an ecosystem of value-adding niche B2B players to mine, transform and add value to web-sourced data.  I hope other web properties follow Yahoo’s lead and open their API, for a fee, to B2B use.

Written by Compliant IA

May 28, 2009 at 9:32 pm

Helping Machines Read, A Simple Microformat Case Study

with 2 comments

I recently made Betterdot’s Contact Us page both human and machine readable by adding hCard microformat markup to the underlying XHTML.  This notion of “machine readable” content is arguably abstract and somewhat obscure, however.  What do we mean?  What do machines see?  Perhaps a picture (or three) is worth the proverbial 1,000 words.

When a human reader, using a web browser, looks at the page, he or she sees this:

Contact page, as seen by human readers


Without semantic markups such as the hCard microformat markup, a machine (for example a Google bot crawling the Betterdot site for indexing) sees this:

Contact page as seen by machines (no microformat markup)


With semantic markups such as the hCard microformat markup, the same machine or bot sees this:

Contact page, as seen by machines with microformat markup


In layman’s terms, microformats help machines “read” data marked up with microformat tags on the page.   While “reading” falls short of true semantic “understanding”, microformats are certainly a step in the right direction.
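To make the “machine readable” claim concrete, the sketch below extracts hCard fields from a snippet of markup using Python’s standard html.parser.  The contact details are made up, and a real hCard consumer handles many more properties than the two shown here.

```python
from html.parser import HTMLParser

# A minimal hCard: class names like "vcard", "fn" and "tel" carry the semantics.
HCARD = """
<div class="vcard">
  <span class="fn">Fabien Tiburce</span>
  <span class="tel">416-555-0100</span>
</div>
"""

class HCardExtractor(HTMLParser):
    """Collect the text inside elements marked with hCard property classes."""
    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        for prop in ("fn", "tel"):
            if prop in classes:
                self._current = prop

    def handle_data(self, data):
        if self._current and data.strip():
            self.fields[self._current] = data.strip()
            self._current = None

parser = HCardExtractor()
parser.feed(HCARD)
print(parser.fields)  # {'fn': 'Fabien Tiburce', 'tel': '416-555-0100'}
```

Without the class names, the same text is just anonymous strings; with them, a crawler can reliably tell the name from the telephone number.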

Written by Compliant IA

May 19, 2009 at 10:12 pm

The Road to the Semantic Web is Paved with Microformats

with 4 comments

Google recently and quietly announced something huge: “rich snippets”.   Rich snippets are smart previews, displayed right on a search results page.   While Google has long relied on snippets to attach a bit of information to each link (thus letting the user know what he or she might expect on each page represented by a link), rich snippets go a step further: they extract key characteristics of the page, be it the rating of a review or a person’s contact information.    Google doesn’t have to guess it, it knows it.  Google’s rich snippets are powered by microformats and RDFa, two semantic standards that are rapidly gaining adoption.   Google’s implementation allows semantically-marked web content (such as reviews and contact information) to be exposed, aggregated and averaged in a Google search results page.  In short, after years in the lab, the web is at last, albeit quietly, becoming semantic!

Microformats are not a substitute for the semantic web, they are a stepping stone and a very important one.  They demonstrate the feasibility and value of adding semantic meaning to web page content.   They do so using existing browsers and standards.  They do so today, in the field not in the lab.  By making web pages understandable to both humans (also known as readers…)  and machines, using current technologies, current browsers and minimal effort, microformats allow web content to be reliably understood and aggregated by search engines.   The future is bright.  Google could, for example, calculate an average review for a book from a list of semantically compliant sites.  Google could also uniquely identify a user as a single human being across sites.   The semantic web, a web of meaning, is finally taking shape.

I am convinced the semantic web is going to change the way we publish content, exchange, correlate and aggregate information, both in the public domain and the enterprise.   It’s an exciting time for web professionals who can look forward to building companies and next generation systems that leverage semantic data.


In Toronto and interested in the semantic web?  Join us at the Toronto Semantic Web group on LinkedIn.

Written by Compliant IA

May 15, 2009 at 5:14 pm

Evangelists, The Semantic Web Needs You!

with 2 comments


First, a confession.  What started as a curiosity, has turned into a bit of an obsession…  Artificial intelligence, natural language processing, data interchange, global ontologies are all, directly or indirectly, facets of the semantic web.   There is enough in there to excite the geek in me for three life times and there lies the problem… Let me take a step back.

In broad terms, the semantic web refers to a global web of unequivocal meaning that can be used and queried by machines, programs and ultimately user-facing applications.  In equally broad terms, this amounts to turning loose data (words on a page, with no meaning other than their proximity to other words which can be counted, similarities inferred, etc…) into information (meaning, purpose and inter-operability).  Microformats aside, words like ISBN or UPC on most web sites are just that, words.   They mean nothing, they are not tied to the same universal concept, and the words that precede or follow them (which are usually actual ISBN or UPC codes) are not linked to the same resource.  The web was built for people, not machines.  People scan a page and quickly understand the purpose of the page and the meaning of captions, buttons and other elements on the page.   On the other hand, the semantic web refers to a collection (the web is the largest collection of human knowledge ever assembled) understandable to machines. While user-generated tags and meta-data exist, these alone are generally insufficient to be used predictably and reliably by computer programs.  XML is widely used around the web but XML schemas (XML contracts which govern the structure and content of XML documents) are often attached to a single document, a single service or a single organization.  This point alone gets to the root of the problem: without the semantic web, there doesn’t exist a single, universally accepted way of specifying a person, a UPC code, a financial service or a purchasable item.  The fact that product “A” on site X and product “A” on site Y are the same product is established by humans (by comparing brands, labels, model numbers, pictures); it cannot be conclusively and reliably determined by a computer program.
Lastly, while search engines have bridged this gap somewhat, short of a complete Artificial Intelligence system, the information on the web will remain in unstructured data form until technologies like the semantic web become prevalent.  In conclusion, the semantic web, a term coined by Sir Tim Berners-Lee and spearheaded by the W3C, seeks to attach meaning to page content so this content can be consumed, queried and inter-related by machines.  From the largest collection of text in the world, the internet would be elevated to the largest collection of inter-related, meaningful information in the world.
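A small sketch shows what a universal identifier buys you: once two sites expose the same UPC, establishing that their listings are the same product becomes a trivial join, with no human comparison of brands, labels or pictures required.  The product records below are invented for illustration.

```python
# Product listings from two hypothetical sites, each carrying a universal UPC.
site_x = [{"upc": "012345678905", "title": "Acme Widget Pro", "price": 19.99}]
site_y = [{"upc": "012345678905", "title": "WIDGET PRO by Acme", "price": 17.49}]

# With a shared identifier, "same product" is a dictionary lookup, not a guess.
by_upc = {p["upc"]: p for p in site_x}
matches = [(by_upc[p["upc"]], p) for p in site_y if p["upc"] in by_upc]
print(len(matches))  # 1
```

Note that the titles differ wildly; it is the shared identifier, not textual similarity, that makes the match conclusive. The semantic web generalizes exactly this idea to people, services and every other kind of resource.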

The semantic web is generally believed to be the next version of the web.  Whereas Web 1.0 was about basic publishing, Web 2.0 is social, Web 3.0 is expected to be semantic.   Yet for all the promises, its ascension remains clouded with doubts and hindered by real world impediments.   The semantic web is a technology of the future that, until now, has remained in the future.  On paper, all the required building blocks are here.  Standards (W3C recommendations) have been published, parsers, query-engines and core-technologies are available and so are global open-source ontologies.  What’s missing?

The “social web” is largely being promoted and evangelized by a combination of online marketing and user-experience professionals. Evangelists are tremendously important in spreading the word and encouraging adoption.  On the Toronto scene, Web 2.0 evangelists like David Crow, Matthew Milan and Saul Colt come to mind.  And yet the semantic web community hasn’t really reached out to Web 2.0 professionals in general. The conversation mostly revolves around the back-end, infrastructure and core technologies. The semantic web talks about schemas, objects and relationships.  It talks about machine languages and parsers.  It does not directly address the user experience (although its ultimate goal is just that).   To succeed, the semantic web needs to leave the lab and the research department.  It needs to make itself palatable to early adopters and would-be evangelists.  It needs a business plan, promoters and supporters.   It needs to reach out, inform and excite the Web 2.0 community.  Why bother?  While the first iteration of a semantic ecosystem will most likely focus on the “back-end” (similar to back-end-centered Web 1.0 followed by user-centered Web 2.0), this will likely be followed by a second iteration of user-centered services, heavily skewed toward the user experience and powered by semantic web data.  While the web does a lot today, imagine the capabilities of a web 4.0 front-end powered by a semantic web back-end.  The potential is mind boggling.  Let’s go semantic, if you catch my meaning 😉

Resources:

W3C semantic web homepage: http://www.w3.org/2001/sw/

Wikipedia on semantic web: http://en.wikipedia.org/wiki/Semantic_Web

Sample concept from open-source ontology for semantic web (in human readable format): http://sw.opencyc.org/concept/Mx4rvVi1AJwpEbGdrcN5Y29ycA

Open source (created by HP, java-based) semantic web toolkit: http://jena.sourceforge.net

toronto_semantic

In Toronto and interested in the semantic web?  Join us at the Toronto Semantic Web group on LinkedIn.

Written by Compliant IA

May 7, 2009 at 6:05 pm

10 Twitter Tips for Professionals

with 2 comments

I am an unlikely fan of Twitter, the rapidly growing “micro-blogging” platform (I won’t call it a site, read on…).  For starters, I don’t particularly enjoy gossip.  I have no interest in celebrities and I think Smalltalk is a computer language.  So like many, I hesitated to join Twitter.  I was afraid it would amount to pointless chatter, noise.  That was then.  This is now: in a matter of weeks, Twitter has not only become useful to me, it has become downright essential.  Here are 10 Twitter tips I hope professionals find useful.

1. Twitter is a bit like eavesdropping.  The conversation is as good as the participants.  Follow interesting people, creative thinkers, prominent speakers and chances are you are going to be enlightened by a constant flow of insightful tweets.  Follow “noise” and the pearls of wisdom will be few and far between.

2. Follow your friends and peers, sure.  But mostly, seek out people you wouldn’t normally get to converse with.   Unlike LinkedIn, you can virtually follow anyone.  This is unique.  Following someone on Twitter is a bit like being allowed in his or her inner, albeit public, circle.  Twitter has given me new perspectives from people I may not otherwise meet, listen to or learn from on a day to day basis.

3. Everything you say is public.  The search engine in Twitter is very good and real time. The appearance of “inner circle” privacy is just that, an appearance.  Be candid (most people are) but tweet accordingly.

4. Being a fairly public and open platform, Twitter is very transparent.  You can search Twitter (company name,  person, idea) using #hashtags.  This gives you a pretty good idea of  how the company or idea is being perceived.   Real time, unfiltered knowledge.  Brilliant for marketers, researchers and just about anyone involved in creating and selling a product or service.

5. Tele-presence.  There is a great conference in San Francisco you wish to attend but can’t due to prior commitments.  No worries.  Look up the hashtag and “listen” for tweets on the conference.  Key points and take-aways will probably make it on Twitter before they show up anywhere else.  Not quite like being there, but close.

6. Twitter is a platform, more than a site.  The web interface is one of many ways to get on Twitter.  I installed a desktop client called TweetDeck and a BlackBerry client called TwitterBerry.  There are countless other clients, which further drives Twitter’s adoption.  And there lies a valuable take-away on success 2.0: play nice with the community and the community will adopt you and make you successful.

7. Twitter is not just about people, it’s about news.  I essentially stopped using RSS and now use Twitter to read updates from some favourite technical news sites such as Slashdot.  There again, Twitter is a platform more than an application.  Its potential is enormous.

8. Twitter can get your questions answered.  Sure, LinkedIn has Q&As, but the answers take days or weeks to come.  Answers on LinkedIn tend to be longer and well thought-out (some, anyway) but they still take time.  Chances are, unless you are writing a research paper, you need answers at the speed of business.  The answers you will get on Twitter are more like insights, facets of the complete answer.  Quick, opinionated, maybe a follow-up link or two.   From there you can form your own opinion.

9. Twitter restricts you to 140 characters.  What good can you say in 140 characters?  A lot!  Twitter forces you be concise, synthetic, to the point.  As a writer, it’s a good exercise in concision. As a reader, it’s a great time saver that stimulates the mind.

10. Remember Laurence Fishburne as Morpheus in The Matrix: “No one can tell you what The Matrix is, you simply have to experience it for yourself”?  The same goes for Twitter.  Because of its openness, its choice of interfaces and who you follow, Twitter is what you want it to be.  Try it and you just might like it.

Follow Fabien on Twitter at http://twitter.com/FabienTiburce

Written by Compliant IA

April 26, 2009 at 9:04 pm

Custom Software, Executive Q & A

leave a comment »

This post aims to answer some of the questions we frequently get from executives on what we do, the business and process behind information technology in general and software in particular.  First a preamble.  I don’t expect an executive to understand the intricacies and details of what we do as software engineers and consultants.  My job is to understand what an executive requires, what “pain points” might exist in the operation of the business, what opportunities might lie ahead and to devise and implement solutions through information technology.  My  job is to understand and communicate the nature of the solution, scope it, price it, build it and integrate it.  Our primary expertise is software, more specifically custom software.

What is custom software?

You can buy “off-the-shelf” software.  Software of this type is often, quite literally, available on a shelf at a computer or electronics store. Other times it is downloaded or procured from a commercial or open source vendor.  Most people are familiar with this type of  software because of the ubiquitous availability of some well known off-the-shelf software.  If you have used Microsoft Office, you have used off-the-shelf software.  Custom software is purpose built, or rather purpose “assembled” from readily available and custom-built libraries.  You don’t buy or download it ready-made.  

Is custom software built from scratch?  

Not at all.  Today’s application development is more accurately described as application “assembly”.  Architects and developers combine readily available libraries and components to meet the business and functional requirements of the system and the needs of the organization.  The widespread availability of these (often open-source) components has created a new breed of software development, one that relies on rapid prototyping and frequent iterations.  Good developers don’t reinvent the wheel.  They use tried and true readily available components, libraries and best practices.  They don’t make, they assemble.

Why do I need custom software, can’t I customize off-the-shelf software?  

That depends on the software, but in the vast majority of cases you can, to some degree.  Appearances can be deceiving, however.  Making changes to a large, one-size-fits-all software application or platform can often be more expensive than purposefully assembling an application from loose components.  The economics of “buy vs. build” hinge on the nature of the application, which is why neither should be a foregone conclusion.  Always start with your business and functional requirements, initially ignoring what you think is doable, perceived costs and complexity.

Brand “X” off-the-shelf software does 80% of what I need.  How expensive will it be to build the remaining 20%?  

As I said above, I can’t answer this question for each and every situation without further analysis.  But I can say this with absolute certainty: it will cost a lot more than you could ever imagine, and often a lot more than the vendor is willing to admit.  Nowhere does the 80/20 rule apply more than in systems: you will meet 80% of your requirements in 20% of the time and budget.  Commercial vendors know this and are quick to sell you the features that come to mind.  Don’t assume that what you didn’t see is easy to get; it isn’t.  The remainder will be expensive and difficult because, by design, off-the-shelf software is meant to fit most organizations’ needs, not yours specifically.  Custom-built software has a more predictable and linear complexity curve.  While not all features are equal in complexity and scope, building custom software has few or no limitations, and an experienced professional can accurately scope and estimate the time and costs involved in building the features needed.

How do I kick-start a software project?

Every software project needs a mandate.  Software exists to serve a business and functional purpose.  Eliciting requirements is a job for professionals, and any good software consulting organization will put forward individuals experienced in this area.  They will meet your stakeholders, interview current and future users, and seek to understand the business and functional processes the new piece of software is meant to support, alleviate or replace.  From this process, a list of mandatory business and functional requirements emerges.  Be specific and get everything in writing.  Upon delivery, the software should pass user acceptance testing, which ensures that the system meets all stated requirements and is fit for deployment.

How do I measure success on a software project?

Software should be easy to use.  What goes into usability is open for debate, but the outcome isn’t.  Are your users productive?  Do they (the people actually using the software, be they your employees or clients) find it easy to use?  Have previously difficult and time-consuming tasks become easier and faster?  Is the software intuitive?  Does it suit experienced users and novices alike (a difficult balance, by the way)?  Usability is important.  Make sure you work with people who read, think and speak usability.  There are other facets, but this one cannot be overlooked.

Software needs to be fast.  Give the most patient person in the world a web browser, make him wait four seconds and you have a frustrated, irate user.  Rightly so.  People think fast; customers demand highly responsive interactions or they move on.  Fast requires proper software engineering and infrastructure.  Don’t assume any piece of software can scale; that is simply not true.  Principles of scalability must be embedded in the application itself.  We listen to every word Google, YouTube and Facebook software engineers have to say because scalability is very much a science that relies on software patterns, design and infrastructure decisions.  You may not be as large as Google, but scaling down is easier than scaling up.  In this regard, there is absolutely no substitute for experience.  Don’t hire an organization that hasn’t built something comparable in size or scope: they will learn on the job, they won’t meet your expectations and you will miss your target.  Software engineers are worth every penny you pay them.  Expensive?  Adopt an agile methodology and ensure most of your dollars go towards the end product, not superfluous management (not that management is superfluous, but in agile development extra process can in fact be detrimental).
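As one small, hedged illustration of a scalability principle embedded in the application itself (my own example, not a pattern prescribed by any of the companies mentioned above), consider memoizing an expensive lookup so that repeated requests do not repeat the work:

```python
import functools

@functools.lru_cache(maxsize=1024)
def expensive_lookup(key):
    # Stand-in for a slow database query or network call.
    # With lru_cache, the work is done once per key; subsequent
    # calls with the same key are served from memory.
    return sum(ord(c) for c in key)

print(expensive_lookup("customer-42"))  # computed
print(expensive_lookup("customer-42"))  # served from cache
```

The point is that caching, like most scalability decisions, lives in the application’s design; it cannot be bolted on afterwards by buying bigger hardware.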

Software must be easy to change.  If I had to pick one symptom of poorly engineered software, it would be, without a doubt, a pattern of “I asked how long it would take to make small change X and they said it would be Y weeks”.  The truth is, not all software is created equal.  Good software is what we call “declarative”: it can be changed easily because only key functions are “hard” coded; the interactions between code modules and functions that actually create processes are “soft” coded, typically in XML or configuration files.  If your vendor consistently tells you it will take days or weeks to do simple things, they may in fact be honest but (regrettably) incompetent.  Talk to a vendor’s existing or previous clients.  Was the software delivered on time?  Did it perform?  Were changes easily accommodated?  If any of these answers is negative, move on.
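A toy sketch of what “declarative”, soft-coded software can look like (the step names and pipeline here are hypothetical): the process is described as configuration data, so reordering or removing a step is a configuration edit, not a code change.  In a real system this structure would typically live in an XML or configuration file rather than in the source:

```python
# The process is "soft" coded as data; only the individual steps
# are "hard" coded as functions.
PIPELINE_CONFIG = ["strip", "lowercase", "collapse_spaces"]

STEPS = {
    "strip": str.strip,
    "lowercase": str.lower,
    "collapse_spaces": lambda s: " ".join(s.split()),
}

def run_pipeline(text, config=PIPELINE_CONFIG):
    # Each step is looked up by name, so changing the process
    # means editing the config list, not this function.
    for name in config:
        text = STEPS[name](text)
    return text

print(run_pipeline("  Hello   WORLD  "))  # hello world
```

With this shape, the answer to “how long will small change X take?” is usually “as long as it takes to edit one line of configuration”.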

Can my IT department write software?

Some can.  However, most IT departments are barely keeping up with the ongoing needs of the business, and freeing up resources to write and integrate complex software is often prohibitive.  Another consideration: while some IT departments have in-house talent able to write software, writing enterprise software is complex and very much a profession in itself.  Technical skills and development methodologies are taxing and time-consuming to learn and master, and a little knowledge can indeed be a dangerous thing.  If a system is mission-critical and/or will affect your bottom line, leave it to people who do nothing but software development.

How do I choose a vendor?

In operational and logistical areas, the size of the vendor is often proportional to the size of the project.  Software is different, however.  Software scales not according to the number of people on the team but according to the experience of the engineers who architected it.  I once worked for a large consulting organization.  The pitch to new clients was often the same: at the first meeting, they’d bring out the “stars”, the experts.  The client was wowed; clearly, they felt, this was money well spent.  Unfortunately, the contract would be signed, the stars would disappear, never to be seen again, and the client would be stuck with a team of recently hired “B” developers.  Projects at large consulting houses notoriously go wrong and over budget.  That is not to say there isn’t a place for them.  But when buying software, get to know who you are working with.  And keep in mind that small teams do great things.

What vendor would you recommend?

I thought you’d never ask!  We at Betterdot Systems practice what we preach.  We’re a small company of ultra-motivated highly-experienced software professionals who do great things.  Speaking with us is not cheap.  It’s free.  We want to understand your business and your needs before commitments are made or sought.  There are other vendors out there.  In fact if we feel that your requirements don’t fit our expertise and skill set, we’ll happily recommend a few.  Speak with your peers and ask them about their experience with software vendors.  And as I mentioned above, ask to speak with a vendor’s clients.  A good vendor has happy clients.  Happy clients are willing to talk.

25 Years Online. Do I Get a Balloon?


I was 12 years old in 1984 when my parents brought home a Minitel.  It didn’t cost them anything.  The state-owned phone company (the PTT, which later became France Télécom) figured it cost 100 francs a year to print a phone book and 500 francs to manufacture a Minitel.  Hoping to recoup its investment in five years, it gave away 9 million terminals.  Phone books went the way of the dodo and a country, perhaps without realizing it, entered the electronic age.  France Télécom did a lot more than recoup its investment: private services flourished, driving network fee revenues, and by the end of 1999 the network had 25 million users in France.  The Minitel is generally considered the world’s most successful pre-internet online service.  The network was secure (a private phone company network), allowing online banking and electronic commerce to take off.  Booking train tickets, buying from electronic catalogues and looking up online databases were in widespread use by the late 80s.  None of that really mattered to me at the time; I had discovered chat rooms and was hooked (did I mention the per-minute network fees?).  The Minitel is a technological relic by today’s standards and yet won’t completely go away.  There are internet gateways for it now, and it is still commonly used for secure enterprise applications requiring a private network (something that can now easily be handled by TCP/IP technologies, virtual private networks, etc…).  Anyway, upon realizing I had been online for 25 years, I proudly told my wife.  Now this may surprise you as it surprised me, but from the look on her face, I doubt I am getting a balloon…;)

Written by Compliant IA

April 15, 2009 at 3:59 am