This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.
It’s a transformative time for OpenStack.
And I know a thing or two about transformations.
Over the last two and a half years, I’ve managed to lose over 90 pounds.
(Yes, you read that right.)
It was a long and arduous effort, and it is a major personal accomplishment that I take a lot of pride in.
Lessons I’ve Learned
When you go through a major transformation like that, you learn a few things about the process, the journey, and about yourself.
With OpenStack on my mind these days – especially after being nominated for the OpenStack Foundation Board election – I can see correlations between my story and where we need to go with OpenStack.
While there are a number of lessons learned, I’d like to delve into three that are particularly pertinent for our open source cloud project.
1. Clarity on the Goal and the Motivation
It’s a very familiar story for many people. Over the years, I had gained a little bit of weight here and there as life events occurred – graduated college, first job, moved cities, etc. And I had always told myself (and others), “By the time I turn 40 years old, I will be back to my high school weight.”
The year I was to turn 40, I realized that I was running out of time to make good on my word!
And there it was – my goal and my motivation.
So let’s turn to OpenStack – what is our goal and motivation as a project?
According to wiki.OpenStack.org, the OpenStack mission is “to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable. OpenStack is open source, openly designed, openly developed by an open community.”
That’s our goal and motivation:
- meet the needs of public and private clouds
- no matter the size
- simple to deploy
- very scalable
- open across all parameters
While we exist in a time where it’s very easy to be distracted by every new, shiny item that comes along, we must remember our mission, our goal, our motivation – and stay true to what we set out to accomplish.
2. Staying Focused During the “Middle” of the Journey
When I was on the path to lose 90 pounds, it was very tempting to be satisfied during the middle part of the journey.
After losing 50 pounds, needless to say, I looked and felt dramatically better. Oftentimes, I was congratulated – as if I had reached my destination.
But I had not reached my destination.
While I had made great progress – and there were very tangible results to demonstrate that – I had not yet fully achieved my goal. And looking back, I am happy that I was not content to stop halfway through. While I had a lot to be proud of at that point, there was much more to be done.
OpenStack has come a long way in its fourteen releases:
- The phenomenal Newton release focused on scalability, interoperability, and resiliency – things that many potential customers and users have been waiting for.
- The project has now been validated as 100% compliant by the Core Infrastructure Initiative (CII), a Linux Foundation project – a major milestone for the security of OpenStack.
- Our community now offers the “Certified OpenStack Administrator” certification, the kind of credential much of the enterprise expects of datacenter software, further validating OpenStack for them.
We’ve come a long way. But there is more to go to achieve our ultimate goal. Remember our mission: open source cloud, public and private, across all size clouds, massively scalable, and simple to implement.
We are enabling an amazing number of users now, but there is more to do to achieve our goal. While we celebrate our current success, and as more and more customers are being successful with OpenStack in production, we need to keep our eyes on the prize we committed to.
3. Constantly Learning and Adapting
While losing 90 pounds was a major personal accomplishment, it could all have been in vain if I did not learn how to maintain the weight loss once it was achieved.
This meant learning what worked and what didn’t work, as well as adapting to achieve a permanent solution.
Case in point: a part of most weight loss plans is to get plenty of water daily, something I still do to this day. While providing numerous health advantages, it is also a big help with weight loss. However, I found that I would get consumed with daily activities and arrive at the end of the day without having reached my water goal.
Through some experimentation with tactics – which included setting up reminders on my phone and keeping water with me at all times, among other ideas – I arrived at my personal solution: GET IT DONE EARLY.
I made it a point to get through my water goal at the beginning of the day, before my daily activities began. This way, if I did not remember to drink regularly throughout the day, it was of no consequence since I had already met my daily goal.
We live in a world where open source is being adopted by ever more people, many of them new to open source. From Linux to Hadoop to Ceph to Kubernetes, we are seeing more and more projects find success with a new breed of users. OpenStack’s role is not to shun these projects as isolationists, but rather to understand how OpenStack can adapt so that we maximize attainment of our mission.
This also means that we understand how our project gets “translated” to the bevy of customers who have legitimate challenges that OpenStack can help address. It means that we help potential users wade through the cultural IT changes that will be required.
Learning where our market is taking us, as well as adapting to the changing technology landscape, remains crucial for the success of the project.
Room for Optimism
I am personally very optimistic about where OpenStack goes from here. We have come a long way, and have much to be proud of. But much remains to be done to achieve our goal, so we must be steadfast in our resolve and focus.
And it is a mission that we can certainly accomplish. I believe in our vision, our project, and our community.
And take it from me – reaching BIG milestones is very, very rewarding.
Until next time,
This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.
Ask kids what they want to be when they grow up, and you will often hear aspirations like:
- “I want to be an astronaut, and go to outer space!”
- “I want to be a policeman / policewoman, and keep people safe!”
- “I want to be a doctor – I could find a cure for cancer!”
- “I want to build cars / planes / rocket ships / buildings / etc – the best ones in the world!”
- “I want to be an engineer in a high performance computing department!”
OK, that last one rarely comes up.
Actually, I’ve NEVER heard it come up.
But here’s the irony of it all…
That last item is often a major enabler for all the other items.
Surprised? A number of people are.
For many, “high performance computing” – or “HPC” for short – often has a reputation for being a largely academic engineering model, reserved for university PhDs seeking to prove out futuristic theories.
The reality is that high performance computing has influenced a number of things we take for granted on a daily basis, across a number of industries, from healthcare to finance to energy and more.
HPC has been an engineering staple in a number of industries for many years, and has enabled a number of the innovations we all enjoy on a daily basis. And it’s critical to how we will function as a society going forward.
Here are a few examples:
- Do you enjoy driving a car or other vehicle? Thank an HPC department at the auto manufacturer. There’s a good chance that an HPC effort was behind modeling the safety hazards that you may encounter as a driver: unexpected road obstacles, component failure, wear and tear of parts after thousands of miles, and even human driver behavior and error.
- Have you fueled your car with gas recently? Thank an HPC department at the energy / oil & gas company. While there is innovation around electric vehicles, many of us still use gasoline to ensure we can get around. Exploring for, and finding, oil can be an arduous, time-consuming, expensive effort. With HPC modeling, energy companies can find pockets of resources sooner, limiting the amount of exploratory drilling, and get fuel for your car to you more efficiently.
- Have you been treated for illnesses with medical innovation? Thank an HPC department at the pharmaceutical firm and associated research hospitals. Progress in the treatment of health ailments can trace innovation to HPC teams working in partnership with health care professionals. In fact, much of the research done today into curing the world’s diseases, and into genomics, is done on HPC clusters.
- Have you ever been proactively contacted by your bank about suspected fraud on your account? Thank an HPC department at the financial institution. Oftentimes, banks will use HPC clusters to run millions of “Monte Carlo simulations” to model financial fraud and intrusive hacks. The number of models required, and the depth at which they analyze, demands a significant amount of processing power – a stronger-than-normal computing structure.
And the list goes on and on.
- Security enablement
- Banking risk analysis
- Space exploration
- Aircraft safety
- Forecasting natural disasters
- So much more…
If it’s a tough problem to solve, more likely than not, HPC is involved.
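To make the banking example above a bit more concrete, a Monte Carlo risk model at its simplest repeats a randomized trial many times and reads a risk figure off the distribution of outcomes. The sketch below is a minimal toy illustration of the idea – the account count, fraud rate, and loss distribution are all invented for the example, and a real bank’s models would be vastly richer (and are exactly why HPC clusters get involved).

```python
import random

def simulate_fraud_losses(n_simulations=5_000, n_accounts=500,
                          fraud_rate=0.001, mean_loss=500.0):
    """Toy Monte Carlo estimate of fraud exposure across a book of accounts.

    All parameters are illustrative placeholders, not real banking figures.
    Each trial simulates one period: each account may independently suffer
    a fraud event, with a long-tailed loss amount.
    """
    totals = []
    for _ in range(n_simulations):
        total = 0.0
        for _ in range(n_accounts):
            if random.random() < fraud_rate:
                # Exponential draw gives a simple long tail of loss sizes.
                total += random.expovariate(1.0 / mean_loss)
        totals.append(total)
    totals.sort()
    # Report the 99th-percentile loss - the kind of tail-risk figure
    # a risk team would look at.
    return totals[int(0.99 * len(totals))]
```

Scale those loop counts up to millions of trials across millions of accounts, with correlated events instead of independent ones, and the need for serious parallel compute becomes obvious.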
And that’s what’s so galvanizing about HPC. It is a computing model that enables us to do so many of the things we take for granted today, but is also on the forefront of new innovation coming from multiple industries in the future.
HPC is also a place where emerging technologies get early adoption, mainly because experts in HPC require new tech to get even deeper into their trade. Open source is a major staple of this group of users, especially with deep adoption of Linux.
You also see early adoption of open source cloud tech like OpenStack (to help institutions share compute power and storage to collaborate) and of open source distributed storage tech like Ceph (to connect highly performant file systems to colder backup storage, often holding “large data”). I anticipate we will see this space be among the first to broadly adopt tech in Internet-of-Things (IoT), blockchain, and more.
HPC has been important enough that governments around the world have funded multiple initiatives to drive more innovation using the model. Here are a few examples:
- The National Strategic Computing Initiative
- The Cancer Moonshot Program
- The Exascale Computing Project
- The STEM Program for Students (Science, Technology, Engineering, Math)
This week, there is a large gathering of HPC experts in Salt Lake City (SuperComputing16), where engineers and researchers meet, discuss, and collaborate on advancing HPC – from implementing tech like cloud and distributed storage, to best practices in modeling and infrastructure, to driving further progress in medicine, energy, finance, security, and manufacturing. It should be a stellar week with some of the best minds around. (SUSE is out here as well – David has a great blog here on everything that we’re doing at the event.)
High performance computing: take a second look – it may be much more than you originally thought.
And maybe it can help revolutionize YOUR industry.
Until next time,
This is a duplicate of a blog I authored for HP, originally published at hp.nu/Lg3KF.
In a joint blog authored with theCube’s John Furrier and Scality’s Leo Leung, we pointed out some of the unique characteristics of data that make it act and look like a vector.
At that time, I promised we’d delve into specific customer uses for data and emerging data technologies – so let’s begin with our friends in the telecommunications and media industries, specifically around the topic of content distribution.
But let’s start at a familiar point for many of us…
If you’re like most people, when it comes to TV, movies, and video content, you’re an avid (sometimes binge-watching) fan of video streaming and video on-demand. More and more people are opting to view content via streaming technologies. In fact, a growing number of broadcast shows are viewed on mobile and streaming devices, as are a number of live events, such as this year’s NCAA basketball tournament.
These are fascinating data points to ponder, but think about what goes on behind them.
How does all this video content get stored, managed, and streamed?
Suffice it to say, telecom and media companies around the world are addressing this exact challenge with content delivery networks (CDN). There are a variety of technologies out there to help develop CDNs, and one interesting new option is object storage, especially when it comes to petabytes of data.
Here’s how object storage helps when it comes to streaming content.
- With streaming content comes a LOT of data. Managing and moving that data is a key area to address, and object storage handles it well. It allows telecom and media companies to effectively manage many petabytes of content with ease – many IT options lack that ability to scale. Features of object storage like replication and erasure coding allow users to break large volumes of data into bite-sized chunks and disperse them over several different server nodes – oftentimes, across several different geographic locations. As data is needed, it is rapidly reassembled and distributed.
- Raise your hand if you absolutely love to wait for your video content to load. (Silence.) The fact is, no one likes to see the status bar slowly creeping along while you’re waiting for zombies, your futbol club, or the next big singing sensation to show up on the screen. Because object storage technologies are able to support super high bandwidth and millions of HTTP requests per minute, any company looking to distribute media can give its customers access to content with superior performance. It has a lot to do with the network, but also with the software managing the data behind the network – and object storage fits the bill.
These are just two of the considerations, and there are many others, but object storage becomes an interesting technology to consider if you’re looking to get content or media online, especially if you are in the telecom or media space.
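To give a feel for the chunk-and-disperse idea behind erasure coding, here is a deliberately tiny sketch using single XOR parity. Real object storage systems use far stronger codes (Reed-Solomon variants that survive multiple simultaneous failures); this toy version survives the loss of any one chunk, which is enough to show why a node or site can vanish without losing the data.

```python
def _xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int = 4) -> list[bytes]:
    """Split data into k equal chunks plus one XOR parity chunk.

    The k+1 chunks can be spread across different servers or sites;
    any single chunk can be lost and later reconstructed.
    """
    size = -(-len(data) // k)                 # ceiling division
    padded = data.ljust(size * k, b"\0")      # pad to an even split
    chunks = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = chunks[0]
    for ch in chunks[1:]:
        parity = _xor(parity, ch)
    return chunks + [parity]

def recover(surviving: list[bytes]) -> bytes:
    """Reconstruct the one missing chunk by XOR-ing all survivors."""
    missing = surviving[0]
    for ch in surviving[1:]:
        missing = _xor(missing, ch)
    return missing
```

Because the parity chunk is the XOR of all data chunks, XOR-ing any k surviving chunks yields the missing one – the same principle, scaled up, is what lets a CDN keep serving video while a storage node is down.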
Want a real life example? Check out how our customer RTL II, a European based television station, addressed their video streaming challenge with object storage. It’s all detailed here in this case study – “RTL II shifts video archive into hyperscale with HP and Scality.” Using HP ProLiant SL4540 big data servers and object storage software from HP partner Scality, RTL II was able to boost their video transfer speeds by 10x!
Webinar this week! If this is a space you could use more education on, Scality and HP will be hosting a couple of webinars this week, specifically around object storage and content delivery networks. If you’re looking for more on this, be sure to join us – here are the details:
Session 1 (Time-friendly for European and APJ audiences)
- Who: HP’s big data strategist, Sanjeet Singh, and Scality VP, Leo Leung
- Date: Wed, Apr 8, 2015
- Time: 3pm Central Europe Summer / 8am Central US
- Registration Link
Session 2 (Time-friendly for North American audiences)
- Who: HP Director, Joseph George, and Scality VP, Leo Leung
- Date: Wed, Apr 8, 2015
- Time: 10am Pacific US / 12 noon Central US
- Registration Link
And now off to relax and watch some TV – via streaming video of course!
Until next time,
This is a joint blog I did with John Furrier of SiliconAngle / theCube and Leo Leung from Scality, originally published at http://bit.ly/1E6nQuR
Data is an interesting concept.
During a recent CrowdChat a number of us started talking about server based storage, big data, etc., and the topic quickly developed into a forum on data and its inherent qualities. The discussion led us to realize that data actually has a number of attributes that clearly define it – similar to how a vector has both a direction and magnitude.
Several of the attributes we uncovered as we delved into this notion of data as a vector include:
- Data Gravity: This was a concept developed by my friend, Dave McCrory, a few years ago, and it is a burgeoning area of study today. The idea is that as data is accumulated, additional services and applications are attracted to this data – similar to how a planet’s gravitational pull attracts objects to it. An example would be the number 10. If the “years old” context is “attracted” to that original data point, it adds a certain meaning to it. If the “who” context is then applied – a dog vs. a human being – it takes on additional meaning.
- Relative Location with Similar Data: You could argue that this is related to data gravity, but I see it as a poignant point that bears calling out. At a Hadoop World conference many years ago, I heard Tim O’Reilly make the comment that our data is most meaningful when it’s around other data. A good example of this is medical data. Health information of a single individual (one person) may lead to some insights, but when placed together with data from members of a family, co-workers on a job location, or the citizens of a town, you are able to draw meaningful conclusions. When grouped with other data, individual pieces of data take on more meaning.
- Time: This came up when someone posed the question “does anyone delete data anymore?” With storage costs at scale becoming more and more affordable, we concluded that there is no longer an economic need to delete data (though there may be regulatory reasons to do so). Then came the question of determining what data was not valuable enough to keep, which led to the epiphany that data that might be viewed as not valuable today may become significantly valuable tomorrow. Medical information is a good example here as well – capturing the fact that certain individuals in the 1800s were plagued with a specific medical condition may not seem meaningful at the time, until you’ve tracked data on their specific descendants being plagued by similar ills over the next few centuries. It is difficult to quantify the value of specific data at the time of its creation.
In discussing this with my colleagues, it became very clear how early we are in the evolution of data / big data / software defined storage. With so many angles yet to be discussed and discovered, the possibilities are endless.
This is why it is critical that you start your own journey to salvage the critical insights your data offers. It can help you drive efficiency in product development, it can help you better serve your constituents, and it can help you solve seemingly unsolvable problems. Technologies like object storage, cloud based storage, Hadoop, and more are allowing us to learn from our data in ways we couldn’t imagine 10 years ago.
And there’s a lot happening today – it’s not science fiction. In fact, we are seeing customers implement these technologies and make a turn for the better – figuring out how to treat more patients, enabling student researchers to share data across geographic boundaries, moving media companies to stream content across the web, and allowing financial institutions to detect fraud when it happens. Though the technologies may be considered “emerging,” the results are very, very real.
Over the next few months, we’ll discuss specific examples of how customers are making this work in their environments, tips on implementing these innovative technologies, some unique innovations that we’ve developed in both server hardware and open source software, and maybe even some best practices that we’ve developed after deploying so many of these big data solutions.
Until next time,
Joseph George – @jbgeorge
Director, HP Servers
Leo Leung – @lleung
John Furrier – @furrier
Founder of SiliconANGLE Media
Cohost of @theCUBE
CEO of CrowdChat
This is a duplicate of the blog I’ve authored on the HP blog site at http://h30507.www3.hp.com/t5/Hyperscale-Computing-Blog/The-HP-Big-Data-Reference-Architecture-It-s-Worth-Taking-a/ba-p/179502#.VMfTrrHnb4Z
I recently posted a blog on the value that purpose-built products and solutions bring to the table, specifically around the HP ProLiant SL4540 and how it really steps up your game when it comes to big data, object storage, and other server based storage instances.
Last month, at the Discover event in Barcelona, we announced the revolutionary HP Big Data Reference Architecture – a major step forward in how we, as a community of users, do Hadoop and big data – and it is a stellar example of how purpose-built solutions can revolutionize how you accelerate IT technology, like big data. We’re proud that HP is leading the way in driving this new model of innovation, with the support and partnership of the leading voices in Hadoop today.
Here’s the quick version on what the HP Big Data Reference Architecture is all about:
Think about all the Hadoop clusters you’ve implemented in your environment – they could be pilot or production clusters, hosted by developer or business teams, and hosting a variety of applications. If you’re following standard Hadoop guidance, each instance is most likely a set of general purpose server nodes with local storage.
For example, your IT group may be running a 10 node Hadoop pilot on servers with local drives, your marketing team may have a 25 node Hadoop production cluster monitoring social media on similar servers with local drives, and perhaps similar for the web team tracking logs, the support team tracking customer cases, and sales projecting pipeline – each with their own set of compute + local storage instances.
There’s nothing wrong with that set up – It’s the standard configuration that most people use. And it works well.
Just imagine if we made a few tweaks to that architecture.
- What if we took the good-enough general purpose nodes, and replaced them with purpose-built nodes?
- For compute, what if we used HP Moonshot, which is purpose-built for maximum compute density and price performance?
- For storage, what if we used HP ProLiant SL4540, which is purpose-built for dense storage capacity, able to get over 3PB of capacity in a single rack?
- What if we took all the individual silos of storage, and aggregated them into a single volume using the purpose-built SL4540? This way all the individual compute nodes would be pinging a single volume of storage.
- And what if we ensured we were using some of the newer high speed Ethernet networking to interconnect the nodes?
Well, we did.
And the results are astounding.
Beyond the very apparent cost benefit and easier management, there is a surprising bump in read and write performance.
It was a surprise to us in the labs, but we have validated it in a variety of test cases. It works, and it’s a big deal.
And Hadoop industry leaders agree.
“Apache Hadoop is evolving and it is important that the user and developer communities are included in how the IT infrastructure landscape is changing. As the leader in driving innovation of the Hadoop platform across the industry, Cloudera is working with and across the technology industry to enable organizations to derive business value from all of their data. We continue to extend our partnership with HP to provide our customers with an array of platform options for their enterprise data hub deployments. Customers today can choose to run Cloudera on several HP solutions, including the ultra-dense HP Moonshot, purpose-built HP ProLiant SL4540, and work-horse HP Proliant DL servers. Together, Cloudera and HP are collaborating on enabling customers to run Cloudera on the HP Big Data architecture, which will provide even more choice to organizations and allow them the flexibility to deploy an enterprise data hub on both traditional and newer infrastructure solutions.” – Tim Stevens, VP Business and Corporate Development, Cloudera
“We are pleased to work closely with HP to enable our joint customers’ journey towards their data lake with the HP Big Data Architecture. Through joint engineering with HP and our work within the Apache Hadoop community, HP customers will be able to take advantage of the latest innovations from the Hadoop community and the additional infrastructure flexibility and optimization of the HP Big Data Architecture.” – Mitch Ferguson, VP Corporate Business Development, Hortonworks
And this is just a sample of what HP is doing to think about “what’s next” when it comes to your IT architecture, Hadoop, and broader big data. There’s more that we’re working on to make your IT run better, and to lead the communities to improved experience with data.
If you’re just now considering a Hadoop implementation or if you’re deep into your journey with Hadoop, you really need to check into this, so here’s what you can do:
- My pal Greg Battas has posted on the new architecture and goes technically deep into it, so give his blog a read to learn more about the details.
- Hortonworks has also weighed in with their own blog.
If you’d like to learn more, you can check out the new published reference architectures that follow this design featuring HP Moonshot and ProLiant SL4540:
- HP Big Data Reference Architecture: Cloudera Enterprise reference architecture implementation
- HP Big Data Reference Architecture: Hortonworks Data Platform reference architecture implementation
If you’re looking for even more information, reach out to your HP rep and mention the HP Big Data Reference Architecture. They can connect you with the right folks to have a deeper conversation on what’s new and innovative with HP, Hadoop, and big data. And, the fun is just getting started – stay tuned for more!
Until next time,
This is a duplicate of the blog I’ve authored on the HP blog site at http://h30507.www3.hp.com/t5/Hyperscale-Computing-Blog/Purpose-Built-Solutions-Make-a-Big-Difference-In-Extracting-Data/ba-p/173222#.VEUdYrEo70c
Indulge me as I flash back to the summer of 2012 at the Aquatics Center in London, England – it’s the Summer Olympics, where some of the world’s top swimmers, representing a host of nations, are about to kick off the Men’s 100m Freestyle swimming competition. The starter gun fires, and the athletes give it their all in a heated head to head match for the gold.
And the results of the race are astounding: USA’s Nathan Adrian took the gold medal with a time of 47.52 seconds, with Australia’s James Magnussen finishing a mere 0.01 seconds later to claim the silver medal! It was an incredible display of competition, and a real testament to the power of the human spirit.
For an event demanding such precise timing, we can only assume that very sensitive and highly calibrated measuring devices were used to capture accurate results. And it’s a good thing they did – fractions of a second separated first and second place.
Now, you and I have both measured time before – we’ve checked our watches to see how long it has been since the workday started, we’ve used our cell phones to see how long we’ve been on the phone, and so on. It got the job done. Surely the Olympic judges at the 2012 Men’s 100m Freestyle had some of these less precise options available – why didn’t they just simply huddle around one of their wrist watches to determine the winner of the gold, silver and bronze?
OK, I am clearly taking this analogy to a silly extent to make a point.
When you get serious about something, you have to step up your game and secure the tools you need to ensure the job gets done properly.
There is a real science behind using purpose-built tools to solve complex challenges, and the same is true with IT challenges, such as those addressed with big data / scale out storage. There are a variety of infrastructure options to deal with the staggering amounts of data, but there are very few purpose built server solutions like HP’s ProLiant SL4500 product – a server solution built SPECIFICALLY for big data and scale out storage.
The HP ProLiant SL4500 was built to handle your data. Period.
- It provides an unprecedented drive capacity with over THREE PB in a single rack
- It delivers scalable performance across multiple drive technologies like SSD, SAS or SATA
- It provides significant energy savings with shared cooling and power and reduced complexity with fewer cables
- It offers flexible configurations:
  - A 1-node, 60 large form factor drive configuration, perfect for large scale object storage with software vendors like Cleversafe and Scality, or with open source projects like OpenStack Swift and Ceph
  - A 2-node, 25 drive per node configuration, ideal for running Microsoft Exchange
  - A 3-node, 15 drive per node configuration, optimal for running Hadoop and analytics applications
If you’re serious about big data and scale out storage, it’s time to consider stepping up your game with the SL4500. Purpose-built makes a difference, and the SL4500 was purpose-built to help you make sense of your data.
And if you’re here at Hadoop World this week, come on by the HP booth – we’d love to chat about how we can help solve your data challenges with SL4500 based solutions.
Until next time,