This is a duplicate of the blog I’ve authored on the HP blog site at http://h30507.www3.hp.com/t5/Hyperscale-Computing-Blog/The-HP-Big-Data-Reference-Architecture-It-s-Worth-Taking-a/ba-p/179502#.VMfTrrHnb4Z
I recently posted a blog on the value that purpose-built products and solutions bring to the table, specifically around the HP ProLiant SL4540 and how it really steps up your game when it comes to big data, object storage, and other server-based storage instances.
Last month, at the Discover event in Barcelona, we announced the revolutionary HP Big Data Reference Architecture – a major step forward in how we, as a community of users, do Hadoop and big data – and a stellar example of how purpose-built solutions can accelerate IT technologies like big data. We’re proud that HP is leading the way in driving this new model of innovation, with the support and partnership of the leading voices in Hadoop today.
Here’s the quick version on what the HP Big Data Reference Architecture is all about:
Think about all the Hadoop clusters you’ve implemented in your environment – they could be pilot or production clusters, hosted by developer or business teams, and hosting a variety of applications. If you’re following standard Hadoop guidance, each instance is most likely a set of general-purpose server nodes with local storage.
For example, your IT group may be running a 10-node Hadoop pilot on servers with local drives, your marketing team may have a 25-node Hadoop production cluster monitoring social media on similar servers with local drives, and perhaps similar setups exist for the web team tracking logs, the support team tracking customer cases, and sales projecting pipeline – each with its own set of compute + local storage instances.
There’s nothing wrong with that setup – it’s the standard configuration that most people use, and it works well.
Just imagine if we made a few tweaks to that architecture.
- What if we swapped out the good-enough general-purpose nodes for purpose-built nodes?
- For compute, what if we used HP Moonshot, which is purpose-built for maximum compute density and price performance?
- For storage, what if we used HP ProLiant SL4540, which is purpose-built for dense storage capacity, able to get over 3PB of capacity in a single rack?
- What if we took all the individual silos of storage and aggregated them into a single volume using the purpose-built SL4540? That way, all the individual compute nodes would share a single volume of storage.
- And what if we ensured we were using some of the newer high-speed Ethernet networking to interconnect the nodes?
Well, we did.
And the results are astounding.
While the cost benefit and easier management are readily apparent, there is also a surprising bump in read and write performance.
It was a surprise to us in the labs, but we have validated it in a variety of test cases. It works, and it’s a big deal.
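To make the tweaks above concrete, here is a minimal sketch of how such an asymmetric Hadoop layout might look in practice: the SL4540 storage nodes run the HDFS DataNodes and hold all the disks, while the Moonshot compute nodes run only the YARN NodeManagers, with all reads and writes traveling over the high-speed Ethernet fabric. The property values and paths below are hypothetical illustrations, not HP’s published reference settings:

```xml
<!-- Sketch only: hostnames, paths, and values are illustrative assumptions. -->

<!-- hdfs-site.xml on the SL4540 storage nodes, which run the DataNodes
     and aggregate their dense local drives into a single HDFS namespace. -->
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <!-- one entry per local drive in the dense storage chassis -->
    <value>/data/disk01,/data/disk02,/data/disk03</value>
  </property>
  <property>
    <!-- With compute and storage on separate hosts, short-circuit local
         reads no longer apply; I/O flows over the Ethernet fabric. -->
    <name>dfs.client.read.shortcircuit</name>
    <value>false</value>
  </property>
</configuration>

<!-- yarn-site.xml on the Moonshot compute nodes, which run only the
     NodeManagers, dedicating every cartridge to processing. -->
<configuration>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>16384</value>
  </property>
</configuration>
```

The published reference architectures linked below spell out the actual tuned configurations; this fragment just shows the shape of the compute/storage separation.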
And Hadoop industry leaders agree.
“Apache Hadoop is evolving and it is important that the user and developer communities are included in how the IT infrastructure landscape is changing. As the leader in driving innovation of the Hadoop platform across the industry, Cloudera is working with and across the technology industry to enable organizations to derive business value from all of their data. We continue to extend our partnership with HP to provide our customers with an array of platform options for their enterprise data hub deployments. Customers today can choose to run Cloudera on several HP solutions, including the ultra-dense HP Moonshot, purpose-built HP ProLiant SL4540, and work-horse HP ProLiant DL servers. Together, Cloudera and HP are collaborating on enabling customers to run Cloudera on the HP Big Data architecture, which will provide even more choice to organizations and allow them the flexibility to deploy an enterprise data hub on both traditional and newer infrastructure solutions.” – Tim Stevens, VP Business and Corporate Development, Cloudera
“We are pleased to work closely with HP to enable our joint customers’ journey towards their data lake with the HP Big Data Architecture. Through joint engineering with HP and our work within the Apache Hadoop community, HP customers will be able to take advantage of the latest innovations from the Hadoop community and the additional infrastructure flexibility and optimization of the HP Big Data Architecture.” – Mitch Ferguson, VP Corporate Business Development, Hortonworks
And this is just a sample of what HP is doing to think about “what’s next” when it comes to your IT architecture, Hadoop, and broader big data. There’s more that we’re working on to make your IT run better, and to lead the communities to improved experience with data.
If you’re just now considering a Hadoop implementation or if you’re deep into your journey with Hadoop, you really need to check into this, so here’s what you can do:
- My pal Greg Battas has posted on the new architecture and goes technically deep into it, so give his blog a read to learn more about the details.
- Hortonworks has also weighed in with their own blog.
If you’d like to learn more, you can check out the new published reference architectures that follow this design featuring HP Moonshot and ProLiant SL4540:
- HP Big Data Reference Architecture: Cloudera Enterprise reference architecture implementation
- HP Big Data Reference Architecture: Hortonworks Data Platform reference architecture implementation
If you’re looking for even more information, reach out to your HP rep and mention the HP Big Data Reference Architecture. They can connect you with the right folks to have a deeper conversation on what’s new and innovative with HP, Hadoop, and big data. And, the fun is just getting started – stay tuned for more!
Until next time,
This is a duplicate of the blog I’ve authored on the HP blog site at http://h30507.www3.hp.com/t5/Hyperscale-Computing-Blog/Purpose-Built-Solutions-Make-a-Big-Difference-In-Extracting-Data/ba-p/173222#.VEUdYrEo70c
Indulge me as I flash back to the summer of 2012 at the Aquatics Centre in London, England – it’s the Summer Olympics, where some of the world’s top swimmers, representing a host of nations, are about to kick off the Men’s 100m Freestyle swimming competition. The starter gun fires, and the athletes give it their all in a heated head-to-head race for the gold.
And the results of the race are astounding: USA’s Nathan Adrian took the gold medal with a time of 47.52 seconds, with Australia’s James Magnussen finishing a mere 0.01 seconds later to claim the silver medal! It was an incredible display of competition, and a real testament to the power of the human spirit.
For an event demanding such precise timing, we can only assume that very sensitive and highly calibrated measuring devices were used to capture accurate results. And it’s a good thing they did – fractions of a second separated first and second place.
Now, you and I have both measured time before – we’ve checked our watches to see how long it has been since the workday started, we’ve used our cell phones to see how long we’ve been on the phone, and so on. It got the job done. Surely the Olympic judges at the 2012 Men’s 100m Freestyle had some of these less precise options available – why didn’t they simply huddle around one of their wristwatches to determine the gold, silver, and bronze medalists?
OK, I am clearly taking this analogy to a silly extent to make a point.
When you get serious about something, you have to step up your game and secure the tools you need to ensure the job gets done properly.
There is a real science behind using purpose-built tools to solve complex challenges, and the same is true with IT challenges, such as those addressed with big data / scale-out storage. There are a variety of infrastructure options to deal with the staggering amounts of data, but there are very few purpose-built server solutions like HP’s ProLiant SL4500 – a server solution built SPECIFICALLY for big data and scale-out storage.
The HP ProLiant SL4500 was built to handle your data. Period.
- It provides unprecedented drive capacity, with over THREE PB in a single rack
- It delivers scalable performance across multiple drive technologies like SSD, SAS, or SATA
- It provides significant energy savings with shared cooling and power, and reduced complexity with fewer cables
- It offers flexible configurations:
- A 1-node, 60 large form factor drive configuration, perfect for large-scale object storage with software vendors like Cleversafe and Scality, or with open source projects like OpenStack Swift and Ceph
- A 2-node, 25 drive per node configuration, ideal for running Microsoft Exchange
- A 3-node, 15 drive per node configuration, optimal for running Hadoop and analytics applications
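For a rough sense of where the “over THREE PB in a single rack” figure comes from, here is a back-of-the-envelope sketch. The drive size and chassis-per-rack count are my own illustrative assumptions, not HP’s official specifications:

```python
# Back-of-the-envelope check of the "3+ PB per rack" claim.
# Assumed numbers below are illustrative, not HP's official specs.

DRIVES_PER_CHASSIS = 60   # the 1-node, 60 large form factor drive configuration
DRIVE_TB = 6              # assume 6 TB large form factor drives
CHASSIS_PER_RACK = 9      # assume ~4U chassis filling a standard 42U rack

capacity_tb = DRIVES_PER_CHASSIS * DRIVE_TB * CHASSIS_PER_RACK
capacity_pb = capacity_tb / 1000

print(f"{capacity_pb:.2f} PB per rack")  # prints "3.24 PB per rack"
```

With denser drives or more chassis the number only grows, which is why the per-rack capacity claim holds up under a range of reasonable assumptions.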
If you’re serious about big data and scale-out storage, it’s time to consider stepping up your game with the SL4500. Purpose-built makes a difference, and the SL4500 was purpose-built to help you make sense of your data.
And if you’re here at Hadoop World this week, come on by the HP booth – we’d love to chat about how we can help solve your data challenges with SL4500 based solutions.
Until next time,
This is a duplicate of a blog I posted on del.ly/60119gex.
This week, those of us on the OpenStack and Red Hat OpenStack teams are partying like it’s 1999! (For those of you who don’t get that reference, read this first.)
Let me provide some context…
In 1999, when Linux was still in the early days of being adopted broadly by the enterprise (similar to an open source cloud project we all know), Dell and Red Hat joined forces to bring the power of Linux to the mainstream enterprise space.
Fast forward to today, and we see some interesting facts:
- Red Hat has become the world’s first billion-dollar open source company
- 1 out of every 5 servers sold annually runs Linux
- Enterprises are far more receptive to open source than they were in the past
So today – Dell and Red Hat are doing it again: this time with OpenStack.
Today, we announce the availability of the Dell Red Hat Cloud Solution, Powered by Red Hat Enterprise Linux OpenStack Platform – a hardware + software + services solution focused on enabling the broader mainstream market to run and operate OpenStack, backed by Dell and Red Hat. It combines a hardened architecture, a validated distribution of OpenStack, additional software, and services and support to get you going and keep you going. It lets you:
- Accelerate your time to value with jointly engineered open, flexible components and purpose engineered configurations to maximize choice and eliminate lock-in
- Expand on your own terms with open, modular architectures and stable OpenStack technologies that can scale out to meet your evolving IT and business needs
- Embrace agility with open compute, storage, and networking technologies to transform your application development, delivery, and management
- Provide leadership to your organization with new agile, open IT services capable of massive scalability to meet dynamic business demands
Here is one more data point to consider – Dell’s IT organization is using the RHEL OpenStack Platform as a foundational element for incubating new technologies with a self-service cloud infrastructure. Now, that is a pretty strong statement about how an OpenStack cloud can help IT drive innovation in a global-scale organization.
At the end of the day, both Dell and Red Hat are committed to getting OpenStack to the enterprise with the right level of certification, validation, training, and support.
We’ve done it before with RHEL, and we’re going to do it again with OpenStack.
Until next time,
Hello from sunny Santa Clara!
My team and I are here at the BIG DATA INNOVATION SUMMIT representing Dell (the company I work for), and it’s been a great day one.
I just wanted to take a few minutes to jot down some interesting ideas I heard today:
- In his keynote, Daniel Austin argued that the “Internet of things” should really be the “individual network of things” – highlighting that the number of devices, their connectivity, their availability, and their partitioning will be key in the future.
- One data point that also came out of Daniel’s talk – every person is predicted to generate 20 PETABYTES of data over the course of a lifetime!
- Juan Lavista of Bing hit on a number of key myths around big data:
- the most important part of big data is its size
- to do big data, all you need is Hadoop
- with big data, theory is no longer needed
- data scientists are always right :)
QUOTE OF THE DAY: “Correlation does not yield causation.” – Juan Lavista (Bing)
- Anthony Scriffignano was quick to admonish the audience that “it’s not just about data, it’s not just about the math… [data] relationships matter.”
- The state of Utah is taking a very progressive view of the areas where analytics can drive efficiency at the state level – census data use, welfare system fraud, etc. It appears Utah is taking a leadership position in doing so.
I also had the privilege of moderating a panel on the topic of the convergence between HPC and the big data spaces, with representatives on the panel from Dell (Armando Acosta), Intel (Brent Gorda), and the Texas Advanced Computing Center (Niall Gaffney). Some great discussion about the connections between the two, plus tech talk on the Lustre plug-in and the SLURM resource management project.
Additionally, Dell product strategists Sanjeet Singh and Joey Jablonski presented on a number of real user implementations of big data and analytics technologies – from university student retention projects to building a true centralized, enterprise data hub. Extremely informative.
All in all, a great day one!
If you’re out here, stop by and visit us at the Dell booth. We’ll be showcasing our Hadoop and big data solutions, as well as some of the analytics capabilities we offer.
(We’ll also be giving away a Dell tablet on Thursday at 1:30, so be sure to get entered into the drawing early.)
Stay tuned, and I’ll drop another update tomorrow.
Until next time,
This is a repost of my blog at http://dell.to/1euUcqk.
In this short interview with Inc., Michael Dell provides an overview of the company’s transformation into a leading player in the “data economy.”
As Michael notes, with the costs of collecting data decreasing, more companies in a growing number of industries are making better use of existing data sources, and gathering data from new sources.
And that’s where Dell has been enabling customers for years with solutions built with technologies like Hadoop and NoSQL. Helping companies and organizations make better use of this data, and assisting them in using it to solve their challenges, are just a few of the ways Dell has changed the Big Data conversation, and built an entirely new enterprise business along the way.
As a member of the Technology CEO Council, Michael also recently joined other tech CEOs to discuss the data economy with policy makers. As an example of the potential of the data economy, he explained how Dell’s growing health information technology practice includes 7 billion medical images. These images are in an aggregated data set allowing researchers to mine them for patterns and predictive analytics.
“There’s lots that can be done with this data that was very, very siloed in the past,” Michael told Computerworld. “We’re really just kind of scratching the surface.”
It’s certainly an exciting time to be at Dell – and the data revolution continues!
What is your relationship to OpenStack, and why is its success important to you? What would you say is your biggest contribution to OpenStack’s success to date?
I believe OpenStack represents a shift by service providers and enterprise IT toward deeper community collaboration on new technologies and practices, and I will continue to drive the initiative to make my customers and the community successful in a very real-world, meaningful way.
Describe your experience with other non-profits or serving as a board member. How does your experience prepare you for the role of a board member?
What do you see as the Board’s role in OpenStack’s success?
What do you think the top priority of the Board should be in 2014?
1. Clarify the definition of OpenStack – what is core, what is compliant, and what is not.
2. Understand where the strategic opportunities lie for OpenStack as a technology, and clear the path to ensure OpenStack gets there.
3. Fully enable any and every new entrant to OpenStack in a real way – developers, implementers, and users – with the right level of documentation, tools, community support, and vendor support.
Thanks, and appreciate your nomination to represent the OpenStack Foundation in 2014!
Until next time,