For many years, IT Operations was expected to remediate service outages with only basic knowledge of the applications and limited support from the Dev teams. Additionally, the broad variety of technologies deployed in a typical IT infrastructure (physical and virtual resources, applications, etc.) has been a huge barrier to adopting a highly integrated approach to building IT infrastructures. For these two reasons, IT Operations was long forced to build and operate the IT infrastructure with fairly basic tooling.
I’d like to share with you some of my thoughts about the challenges that Enterprises are facing when attempting to leverage the SaaS service model. As always, your comments are welcome!
- Data Access
It’s a no-brainer that, among other goals, any Cloud Service Provider wants to excel in at least three key areas:
- Quality of service – delivering the best quality of service, for example stability, predictable performance, availability, etc.
- Customer experience – the service is easy to use, it can be consumed in many ways (e.g. tablet, laptop, etc.), great customer training and support are available, etc.
- Competitive offering – it can be a combination of features, pricing, niche use cases, etc.
Scaling up (or vertically) means adding more resources to an existing component of a system. Adding more RAM and/or hard drives to a Hadoop DataNode is an example of scaling up a Hadoop cluster.
Scaling out (or horizontally) means adding new components (or building blocks) to a system. Adding a new DataNode to a Hadoop cluster is an example of scaling out the cluster.
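The difference between the two can be sketched in a few lines of Python. This is purely illustrative: the node names and storage capacities are made up, and real HDFS capacity planning would also account for the replication factor.

```python
# Illustrative sketch of scale-up vs. scale-out for a Hadoop-style cluster.
# Node names and capacities are hypothetical, not real defaults.

def cluster_capacity_tb(nodes):
    """Total raw storage across all DataNodes, in TB."""
    return sum(nodes.values())

# A small cluster: DataNode name -> raw storage in TB
cluster = {"dn1": 12, "dn2": 12, "dn3": 12}

# Scale UP: add disks to an existing DataNode (more resources per node)
cluster["dn1"] += 12          # dn1 grows from 12 TB to 24 TB

# Scale OUT: add a brand-new DataNode to the cluster (more nodes)
cluster["dn4"] = 12

print(cluster_capacity_tb(cluster))  # 60
```

Either path raises total capacity from 36 TB to 60 TB; the difference is that scaling up is bounded by what one chassis can hold, while scaling out adds building blocks, which is the approach Hadoop was designed around.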
Over the last few months I’ve had conversations with many Hadoop users and developers. I’m glad to see that everyone wants to run Hadoop in production. Most practitioners also realize that, although Hadoop can scale, there are no clear guidelines describing how to scale Hadoop up or out, from very small to very large. Continue reading
Last week I joined my colleagues Rob Hirschfeld (RAH, @zehicle, http://robhirschfeld.com/) and Joseph George (JBG, @jbgeorge, http://jbgeorge.net/) in a conversation with the CloudCast crew talking about Dell’s leadership in two of the hottest technologies in the market today – OpenStack and Hadoop.
Here are some of the highlights from Episode 16: Dell, Dude you’re getting a cloud
Last week I had the privilege of attending the TDWI World Conference in San Diego. I went to talk about the Hadoop solution we announced last week, meet with press and analysts, and get a good feel for what is going on in this space.
The Dell | Hadoop solution, offered in conjunction with Cloudera and called Dell | Cloudera Solution for Apache Hadoop, lowers the barrier to adoption for businesses looking to use Hadoop in production. Dell’s customer-centered approach is to create rapidly deployable and highly optimized end-to-end Hadoop solutions running on commodity hardware. Dell provides all the hardware and software components and resources to meet the customer’s requirements and no other supplier need be involved. Continue reading
Last Monday, July 25th, I gave a presentation at OSCON about the integration between Hadoop and the Enterprise Data Warehouse (EDW).
The session was well attended, and I thought the dialog and the exchange of opinions and ideas were very good. A few of the conversations continued after the session, which (once again!) indicates that managing Big Data is top of mind for many practitioners.
I personally enjoyed the dialog and I also learned quite a few things! Thanks O’Reilly for another successful OSCON! Continue reading