There is a certain irony to this post. It's a bit like a car salesman trying to sell you a bicycle. My career so far has largely revolved around relational databases. That is slowly changing, however, as new storage mechanisms and models emerge and demonstrate that they are better suited to certain requirements. I discuss a number of them here.
1. Distributed file systems. DFS, out of the box, scale well beyond the capabilities of relational databases. Hadoop is an open-source distributed file system inspired by Google's GFS (the file system underlying BigTable). Hadoop also implements MapReduce, a distributed computing layer on top of the file system.
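The MapReduce model itself is simple to sketch. The following single-process Python simulation of a word count (illustrative only, not Hadoop's actual Java API) shows the map, shuffle and reduce phases that Hadoop distributes across a cluster:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group emitted values by key, as Hadoop's shuffle/sort stage does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate (here, sum) the values for each key."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 3
```

In Hadoop, the map and reduce functions run in parallel on the nodes holding the data; only the shuffle moves data across the network, which is what lets the model scale to thousands of machines.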
2. Enterprise search servers. The biggest eye opener in recent years (which we implemented for a public library's "social" catalogue) has to be Solr. Solr is based on Lucene and also integrates with Hadoop. Already in widespread use, this product is poised to gain further adoption as more organizations seek to expose their data (including social data) to the world through search. The speed and features of Solr sell search servers better than I ever could and quite simply leave relational databases in the dust.
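Part of Solr's appeal is how little ceremony a query takes: it is plain HTTP against the standard /select handler. A small sketch of building such a query URL (the host and the "catalogue" core name are hypothetical; q, fq, rows and wt are standard Solr parameters):

```python
from urllib.parse import urlencode

def solr_query_url(base_url, core, text, rows=10):
    """Build a URL for Solr's standard /select query handler."""
    params = urlencode({
        "q": text,          # full-text query
        "fq": "type:book",  # filter query, e.g. restrict to one record type
        "rows": rows,       # number of results per page
        "wt": "json",       # response format
    })
    return f"{base_url}/solr/{core}/select?{params}"

url = solr_query_url("http://localhost:8983", "catalogue", "climate change")
print(url)
```

Fetching that URL with any HTTP client returns ranked, relevance-scored results in milliseconds, with faceting and highlighting available through further parameters; a relational LIKE query simply cannot compete on either speed or features.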
3. RDF stores. While relational databases are governed by an overarching schema and excel at one-to-many relationships, RDF stores are capable of storing disparate data and excel at many-to-many relationships. Open source products include Jena and Sesame. Unfortunately, at the present time, the performance of RDF stores falls well short of that of relational databases for one-to-many data (the most typical in enterprise databases), making their widespread enterprise adoption a long shot.
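The reason many-to-many comes naturally is that RDF data is nothing but (subject, predicate, object) triples, so no join tables are needed. A toy in-memory store (purely illustrative, not Jena or Sesame) makes the point:

```python
# Each fact is a (subject, predicate, object) triple.
triples = {
    ("alice",  "authored", "paper1"),
    ("bob",    "authored", "paper1"),   # one paper, many authors
    ("alice",  "authored", "paper2"),   # one author, many papers
    ("paper1", "cites",    "paper2"),   # a different predicate, same store
}

def query(s=None, p=None, o=None):
    """Match triples against an (s, p, o) pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

authors_of_paper1 = {s for s, _, _ in query(p="authored", o="paper1")}
print(sorted(authors_of_paper1))  # ['alice', 'bob']
```

Note that adding the "cites" facts required no schema change at all; that flexibility with disparate data is exactly what a fixed relational schema trades away.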
4. Web databases, such as Google's recently (and very quietly) announced Fusion Tables. While functionally and programmatically limited compared to the other stores, the Google product focuses on rapid correlation and visualization of data. A product to watch.
Seismic shift in data storage? Not quite. But an evolution is certainly under way. Relational databases are in widespread use. They are highly capable at storing data and data relationships, scale reasonably well and are economical for the most part. Relational databases are not going away. But the once dominant technology is being challenged by other models that are more capable, more efficient and/or more economical at handling certain tasks. By evaluating these technologies against your organization’s needs, you may find surprising answers and ROI.
Semantic Technologies Will Rise from the Limitations of Relational Databases and the Help of Distributed File Systems
As an architect of large enterprise systems, I look to the Semantic Web with envy and anticipation. And yet, the more I look into the potential of semantic technologies, the more I realize they are victims of the success of the very technologies they are trying to replace. The semantic web is a network of global relations. Semantic content is not bound by a single database schema; it represents globally linked data. However, as an expert in database modelling and database-backed systems, I am forced to concede that, for the purposes of each enterprise, a relational database governed by rules (a schema) mostly internal to the organization and serving a certain functional purpose is often all that's needed. Semantics are, to a large extent, a solution in need of a problem.
And yet I am a strong believer in a semantic future, though not for reasons pertaining to semantics per se. While actual numbers vary by database vendor, installation and infrastructure, relational databases are inherently limited in how much data they can store, query and aggregate efficiently. Millions of rows, yes; billions, no. The world's largest web properties don't use relational databases for primary storage; they use distributed file systems. Inspired by Google's famous GFS, Hadoop is a free, open-source distributed file system. It currently supports clusters of 2,000 nodes (servers) and, coupled with MapReduce, allows complete abstraction of hardware across a large array of servers, assured failover and distributed computing. While 2,000 servers seems like a lot, even for a large enterprise, I am amazed how many enterprise clients and partners are dealing with ever increasing datasets that challenge what relational databases were designed for.
Why does this matter? When dealing with millions of files and billions of "facts" on a distributed file system, semantic technologies start making a lot of sense.
In fact, dealing with universally identified, loosely structured content is precisely what semantic technologies were engineered to address. And so I am hopeful: not that semantic technologies will prevail because of some inherent advantage, but that the future points to gigantic datasets of disparate origins, ill suited conceptually and technically to relational databases. It's not that semantic technologies are better; it's that they are better suited for the times ahead.