
Archive for the ‘open source’ Category

Renewing Focus on Bringing OpenStack to the Masses

October 17, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Happy Newton Release Week!

This 14th release of OpenStack is one we’ve all been anticipating, with its focus on scalability, resiliency, and overall user experience – important aspects that really matter to enterprise IT organizations, and that help drive broader adoption of OpenStack among those users.

(More on that later.)

Six years of Community, Collaboration, and Growth

I was fortunate enough to have been at the very first OpenStack “meeting of the minds” at the Omni hotel in Austin, back in 2010, when just the IDEA of OpenStack was in preliminary discussions. Just a room full of regular everyday people, across numerous industries, who saw the need for a radical new open source cloud platform, and had a passion to make something happen.

And over the years, we’ve seen progress: the creation of the OpenStack Foundation, thousands of contributors to the project, scores of companies throwing their support and resources behind OpenStack, the first steps beyond North America to Europe and Asia (especially with all the OpenStack excitement in India, China, and Japan), numerous customers adopting our revolutionary cloud project, and on and on.

Power to the People

But this project is also about our people.

And we, as a community of individuals, have grown and evolved over the years as well – as have the developers, customers and end users we hope to serve with the project. What started out with a few visionary souls has now blossomed into a community of 62,000+ members from 629 supporting companies across 186 countries.

As a member of the community, I’ve seen positive growth in my own life since then as well.  I’ve been fortunate to have been part of some great companies – like Dell, HPE, and now SUSE.  And I’ve been able to help enterprise customers solve real problems with impactful open source solutions in big data, storage, HPC, and of course, cloud with OpenStack.

And there might have been one other minor area of self-transformation since those days as well, as the graphic illustrates… 🙂

[Before-and-after photo]

Clearly we – the OpenStack project and the OpenStack people – are evolving for the better.

 

So Where Do We Go From Here?

In 2013, I was able to serve the community by being a Director on the OpenStack Foundation Board. Granted, things were still fairly new – new project ideas emerging, new companies wanting to sponsor, developers being added by the day, etc – but there was a personal focus I wanted to drive among our community.

“Bringing OpenStack to the masses.”

And today, my hope for the community remains the same.

While we celebrate all the progress we have made, here are some of my thoughts on what we, as a community, should continue to focus on to bring OpenStack to the masses.

Adoption with the Enterprise by Speaking their Language

Take a look at the most recent OpenStack User Survey. It provides a great snapshot of the areas where users need help.

  • “OpenStack is great to recommend, however there’s a fair amount of complexity that needs to be tackled if one wishes to use it.”
  • “OpenStack lacks far too many core components for anything other than very specialized deployments.”
  • “Technology is good, but no synergies between the sub-projects.”

The 2016 data suggests that enterprise customers want these sorts of issues addressed, and want security and management practices to keep pace with new features. And, with all the very visible security breaches in recent months, the enterprise is looking for open source projects to put more emphasis on security.  In fact, many of the customers I engage with love the idea of OpenStack, but still need help with these fundamental requirements.

Designing / Positioning OpenStack to Address Business Challenges

Have you ever wondered why so many tall office buildings have revolving doors? Isn’t a regular door simpler, less complex, and easier to use?  Why do we see so many revolving doors, when access can be achieved so much more simply by other means?

Because revolving doors don’t exist to solve access problems – they solve a heated-air loss problem.

When a regular door in a tall building opens, cold outside air forces its way in, pushing the warmer, lighter air up – where it is then lost through vents at the top of the building. Revolving doors dramatically limit that loss by sealing off most of the opening as they rotate.

Think of our project in the same way.

  • What business challenges can be addressed today / near-term by implementing an OpenStack-based solution?
  • Beyond the customer set looking to build a hosted cloud to resell, what further applications can OpenStack be applied to?
  • How can OpenStack provide an industry-specific competitive advantage to the financial sector?  To healthcare?  To HPC?  To the energy sector?  How about retail or media?

Address the Cultural IT Changes that Need to Happen

I recently read a piece where Jonathan Bryce spoke to the “cultural IT changes that need to occur” – and I love that line of thinking.

Jonathan specifically said “What you do with Windows and Linux administrators is the bigger challenge for a lot of companies. Once you prove the technology, you need policies and training to push people.”

That is spot on.

What we are all working on with OpenStack will fundamentally shift how IT organizations will operate in the future. Let’s take the extra step, and provide guidance to our audiences on how they can evolve and adapt to the coming changes with training, tools, community support, and collaboration. A good example of this is the Certified OpenStack Administrator certification being offered by the Foundation, and training for the COA offered by the OpenStack partner ecosystem.

Further Champion the OpenStack Operator

Operators are on the front lines of implementing OpenStack, and “making things work.” There is no truer test of the validity of our OpenStack efforts than when more and more operators can easily and simply deploy / run / maintain OpenStack instances.

I am encouraged by the increased focus on documentation, connecting developers and operators more, and the growth of a community of operators sharing stories on what works best in practical implementation of OpenStack. And we will see this grow even more at the Operator Summit in Barcelona at OpenStack Summit (details here).

We are making progress here but there’s so much more we can do to better enable a key part of our OpenStack family – the operators.

The Future is Bright

Since we’re celebrating the Newton release, a quote from Isaac Newton on looking ahead seems fitting…

“To myself I am only a child playing on the beach, while vast oceans of truth lie undiscovered before me.”

When it comes to where OpenStack is heading, I’m greatly optimistic.  As it has always been, it will not be easy.  But we are making a difference.

And with continued and increased focus on enterprise adoption, addressing business challenges, aiding in the cultural IT change, and an increased focus on the operator, we can go to the next level.

We can bring OpenStack to the masses.

See you in Barcelona in a few weeks.

Until next time,

JOSEPH
@jbgeorge

 


Living Large: The Challenge of Storing Video, Graphics, and other “LARGE Data”

August 25, 2016


UPDATE SEP 2, 2016: SUSE has released a brand new customer case study on “large data” featuring New York’s Orchard Park Police Department, focused on video storage.  Read the full story here!

————–

This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Everyone’s talking about “big data” – I’m even hearing about “small data” – but those of you who deal in video, audio, and graphics are in the throes of a new challenge:  large data.

Big Data vs Large Data

It’s 2016 – we’ve all heard about big data in some capacity. Generally speaking, it is truckloads of data points from various sources, massive in its volume (LOTS of individual pieces of data), its velocity (the speed at which that data is generated), and its variety (both structured and unstructured data types).  Open source projects like Hadoop have been enabling a generation of analytics work on big data.

So what in the world am I referring to when I say “large data?”

For comparison, while “big data” is a huge number of individual data points, each of “normal” size, I’m defining “large data” as an individual piece of data that is massive in its size. Large data generally does not require real-time or fast access, and is often unstructured in its form (i.e., it doesn’t conform to the parameters of relational databases).

Some examples of large data:

[Chart: examples of large data]

 

Why Traditional Storage Has Trouble with Large Data

So why not just throw this into our legacy storage appliances?

Traditional storage solutions (much of what is in most datacenters today) are great at handling standard data – meaning data that is:

  • average / normal in individual data size
  • structured and fits nicely in a relational database
  • when totaled up, doesn’t exceed ~400TB or so in total space

Unfortunately, none of this works for large data.  Large data, due to its size, can consume traditional storage appliances VERY rapidly.  And, since traditional storage was developed when data was thought of in smaller terms (megabytes and gigabytes), large data on traditional storage can bring about performance / SLA impacts.

So when it comes to large data, traditional storage ends up being consumed too rapidly, forcing us to consider adding expensive traditional storage appliances to accommodate.

“Overall cost, performance concerns, complexity and inability to support innovation are the top four frustrations with current storage systems.” – SUSE, Software Defined Storage Research Findings (Aug 2016)

Object Storage Tames Large Data

Object storage, as a technology, is designed to handle storage of large, unstructured data from the ground up.  And since it was built on scalable cloud principles, it can scale to terabytes, petabytes, exabytes, and theoretically beyond.

When you introduce open source to the equation of object storage software, the economics of the whole solution become even better. And since scale-out, open source object storage is essentially software running on commodity servers with local drives, the object storage should scale without issue – as more capacity is needed, you just add more servers.

When it comes to large data – data that is unstructured and individually large, such as video, audio, and graphics – SUSE Enterprise Storage provides the open, scalable, cost-effective, and performant storage experience you need.

SUSE Enterprise Storage – Using Object Storage to Tame Large Data

Here’s how:

  • It is designed from the ground up to tackle large data.  The Ceph project, which is the core of SUSE Enterprise Storage, is built on a foundation of RADOS (Reliable Autonomic Distributed Object Store), and leverages the CRUSH algorithm to scale data across whatever size cluster you have available, without performance hits. (A small code sketch follows this list.)
  • It provides a frequent and rapid innovation pace. It is 100% open source, which means you have the power of the Ceph community to drive innovation, like erasure coding.  SUSE gives these advantages to their customers by providing a full updated release every six months, while other Ceph vendors give customers large data features only as part of a once-a-year release.
  • It offers pricing that works for large data. Many object storage vendors, both commercial and open source, choose to charge you based on how much data you store.  The price to store 50TB of large data is different than the price to store 100TB of large data, and there is a different price if you want to store 400TB of large data – even if all that data stays on one server!  SUSE chooses to provide their customers “per node” pricing – you pay subscription only as servers are added.  And when you use storage dense servers, like the HPE Apollo 4000 storage servers, you get tremendous value.
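To make the RADOS point a bit more concrete, here is a minimal sketch of storing and retrieving a single large object using the python-rados binding that ships with Ceph. The pool name, file name, and config path are assumptions for illustration; a real deployment would often go through a gateway interface (such as an S3- or Swift-compatible endpoint) rather than raw librados.

```python
# A minimal sketch, assuming a reachable Ceph cluster, the python-rados
# binding installed, and a pre-created pool named "large-data" (the pool,
# file name, and config path are illustrative, not from the original post).
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("large-data")   # hypothetical pool
    try:
        # Store one "large data" item -- here a video file -- as an object.
        with open("promo-video.mp4", "rb") as f:
            ioctx.write_full("promo-video.mp4", f.read())

        # CRUSH decides which OSDs hold the object; the client simply
        # asks for it by name and reads the first megabyte back.
        chunk = ioctx.read("promo-video.mp4", length=1024 * 1024)
        print("read back", len(chunk), "bytes")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

This only sketches the client-side interaction; real large-object workflows would typically stream or chunk the data rather than read the whole file into memory.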

 

Why wait?  It’s time to kick off a conversation with your SUSE rep on how SUSE Enterprise Storage can help you with your large data storage needs.  You can also click here to learn more.

Until next time,

JOSEPH

@jbgeorge

Address Your Company’s Data Explosion with Storage That Scales

June 27, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Experts predict that our world will generate 44 ZETTABYTES of digital data by 2020.

How about some context?

[Graphic: 44 zettabytes of data compared to grains of sand]

Now, you may think that these are all teenage selfies and funny cat videos – in actuality, much of it is legitimate data your company will need to stay competitive and to serve your customers.

 

The Data Explosion Happening in YOUR Industry

Some interesting factoids:

  • An automated manufacturing facility can generate many terabytes of data in a single hour.
  • In the airline industry, a commercial airplane can generate upwards of 40 TB of data per hour.
  • Mining and drilling companies can gather multiple terabytes of data per minute in their day-to-day operations.
  • In the retail world, a single store can collect many TB of customer data, financial data, and inventory data.
  • Hospitals quickly generate terabytes of data on patient health, medical equipment data, and patient x-rays.

The list goes on and on. Service providers, telecommunications, digital media, law enforcement, energy companies, HPC research groups, governments, the financial world, and many other industries (including yours) are experiencing this data deluge now.

And with terabytes of data being generated by single products by the hour or by the minute, the next stop is coming up quick:  PETABYTES OF DATA.

Break the Status Quo!

 

Status Quo Doesn’t Cut It

I know what you’re thinking:  “What’s the problem? I’ve had a storage solution in place for years.  It should be fine.”

Not quite.

  1. You are going to need to deal with a LOT more data than you are storing today in order to maintain your competitive edge.
  2. The storage solutions you’ve been using for years have likely not been designed to handle this unfathomable amount of data.
  3. The costs of merely “adding more” of your current storage solutions to deal with this amount of data can be extremely expensive.

The good news is that there is a way to store data at this scale with better performance at a much better price point.

 

Open Source Scale Out Storage

Why is this route better?

  • It was designed from the ground up for scale.
    Much like how mobile devices changed the way we communicate / interact / take pictures / trade stock, scale out storage is a different design for storage. Instead of all-in-one storage boxes, it uses a “distributed model” – farming out the storage to as many servers / hard drives as it has access to, making it very scalable and very performant.  (Cloud environments leverage a very similar model for computing; a toy placement sketch follows this list.)
  • Its cost is primarily commodity servers with hard drives and software.
    Traditional storage solutions are expensive to scale in capacity or performance.  Instead of expensive engineered black boxes, we are looking at commodity servers and a bit of software that sits on each server – you then just add a “software + server” combo as you need to scale.
  • When you go open source, the software benefits get even better.
    Much like other open source technologies, like Linux operating systems, open source scale out storage allows users to take advantage of rapid innovation from the developer communities, as well as cost advantages – you pay primarily for support or services rather than software license fees.
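As a toy illustration of the “distributed model” mentioned above, here is a short sketch of hash-based data placement across a pool of commodity servers. It is deliberately naive – production algorithms such as Ceph’s CRUSH are far more sophisticated, particularly about minimizing data movement when the cluster changes – and all the server and object names are invented.

```python
# A deliberately naive sketch of hash-based placement: each object maps
# to a set of servers by hashing its name, so adding servers adds both
# capacity and throughput. Real placement algorithms (e.g. CRUSH) also
# minimize data movement when nodes are added; this toy does not.
import hashlib

def place(object_name, servers, replicas=3):
    """Pick `replicas` servers for an object by hashing its name."""
    digest = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    start = digest % len(servers)
    return [servers[(start + i) % len(servers)] for i in range(replicas)]

servers = ["storage-node-%02d" % i for i in range(1, 7)]   # six commodity servers
print(place("factory-line-3/2016-06-27.sensor.log", servers))

# Scaling out is just "server + software": add a node and it joins the pool.
servers.append("storage-node-07")
print(place("factory-line-3/2016-06-27.sensor.log", servers))
```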

 

Ready.  Set.  Go.

At SUSE, we’ve put this together in an offering called SUSE Enterprise Storage, an intelligent software-defined storage management solution, powered by the open source Ceph project.

It delivers what we’ve talked about: open source scale out storage.  It scales, it performs, and it’s open source – a great solution to manage all the data coming your way, and one that will grow with your data needs.

And with SUSE behind you, you’ll get full services and support to any level you need.

 

OK, enough talk – it’s time for you to get started.

And here’s a great way to kick this off: Go get your FREE TRIAL of SUSE Enterprise Storage.  Just click this link, and you’ll be directed to the site (note you’ll be prompted to do a quick registration.)  It will give you quick access to the scale out storage tech we’ve talked about, and you can begin your transition over to the new evolution of storage technology.

Until next time,

JOSEPH
@jbgeorge

Tech in Real Life: Content Delivery Networks, Big Data Servers and Object Storage

April 6, 2015


This is a duplicate of a blog I authored for HP, originally published at hp.nu/Lg3KF.

In a joint blog authored with theCube’s John Furrier and Scality’s Leo Leung, we pointed out some of the unique characteristics of data that make it act and look like a vector.

At that time, I promised we’d delve into specific customer uses for data and emerging data technologies – so let’s begin with our friends in the telecommunications and media industries, specifically around the topic of content distribution.

But let’s start at a familiar point for many of us…

If you’re like most people, when it comes to TV, movies, and video content, you’re an avid (sometimes binge-watching) fan of video streaming and video on-demand.  More and more people are opting to view content via streaming technologies.  In fact, a growing number of broadcast shows are viewed on mobile and streaming devices, as are a number of live events, such as this year’s NCAA basketball tournament.

These are fascinating data points to ponder, but think about what goes on behind them.

How does all this video content get stored, managed, and streamed?

Suffice it to say, telecom and media companies around the world are addressing this exact challenge with content delivery networks (CDN).  There are a variety of interesting technologies out there to help develop CDNs, and one interesting new technology to enable this is object storage, especially when it comes to petabytes of data.

Here’s how object storage helps when it comes to streaming content.

  • With streaming content comes a LOT of data.  Managing and moving that data is a key area to address, and object storage handles it well.  It allows telecom and media companies to effectively manage many petabytes of content with ease – many IT options lack that ability to scale.  Features in object storage like replication and erasure coding allow users to break large volumes of data into bite-size chunks and disperse them over several different server nodes, and often several different geographic locations.  When the data is needed, it is rapidly reassembled and distributed.
  • Raise your hand if you absolutely love to wait for your video content to load.  (Silence.)  The fact is, no one likes to see the status bar slowly creeping along, while you’re waiting for zombies, your futbol club, or the next big singing sensation to show up on the screen.  Because object storage technologies are able to support super high bandwidth and millions of HTTP requests per minute, any customer looking to distribute media is able to allow their customers access to content with superior performance metrics.  It has a lot to do with the network, but also with the software managing the data behind the network, and object storage fits the bill.

These are just two of the considerations, and there are many others, but object storage becomes an interesting technology to consider if you’re looking to get content or media online, especially if you are in the telecom or media space.
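To make the performance point a little more concrete, here is a minimal sketch of how a player or edge cache might pull just one segment of a large video over an S3-compatible API, which many object storage products expose. The endpoint URL, credentials, bucket, and object key below are made-up placeholders, not anything from the case study.

```python
# A minimal sketch, assuming an S3-compatible object storage endpoint.
# Endpoint, credentials, bucket, and key are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.net",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Fetch only the first 4 MiB of a large video -- the same pattern an
# HTTP Range request from a streaming player or edge cache would use,
# so viewers start watching without waiting for the whole file.
resp = s3.get_object(
    Bucket="vod-library",
    Key="shows/season1/episode1.mp4",
    Range="bytes=0-4194303",
)
segment = resp["Body"].read()
print(len(segment), "bytes fetched")
```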

Want a real life example? Check out how our customer RTL II, a European based television station, addressed their video streaming challenge with object storage.  It’s all detailed here in this case study – “RTL II shifts video archive into hyperscale with HP and Scality.”  Using HP ProLiant SL4540 big data servers and object storage software from HP partner Scality, RTL II was able to boost their video transfer speeds by 10x.

Webinar this week! If this is a space you could use more education on, Scality and HP will be hosting a couple of webinars this week, specifically around object storage and content delivery networks.  If you’re looking for more on this, be sure to join us – here are the details:

Session 1 (Time-friendly for European and APJ audiences)

  • Who:  HP’s big data strategist, Sanjeet Singh, and Scality VP, Leo Leung
  • Date:  Wed, Apr 8, 2015
  • Time:  3pm Central Europe Summer / 8am Central US
  • Registration Link

Session 2 (Time-friendly for North American audiences)

  • Who:  HP Director, Joseph George, and Scality VP, Leo Leung
  • Date:  Wed, Apr 8, 2015
  • Time: 10am Pacific US / 12 noon Central US
  • Registration Link

And as always, for any questions at all, you can always send us an email at BigDataEcosystem@hp.com or visit us at www.hp.com/go/ProLiant/BigDataServer.

And now off to relax and watch some TV – via streaming video of course!

Until next time,

JOSEPH
@jbgeorge

The HP Big Data Reference Architecture: It’s Worth Taking a Closer Look…

January 27, 2015

This is a duplicate of the blog I’ve authored on the HP blog site at http://h30507.www3.hp.com/t5/Hyperscale-Computing-Blog/The-HP-Big-Data-Reference-Architecture-It-s-Worth-Taking-a/ba-p/179502#.VMfTrrHnb4Z

I recently posted a blog on the value that purpose-built products and solutions bring to the table, specifically around the HP ProLiant SL4540 and how it really steps up your game when it comes to big data, object storage, and other server based storage instances.

Last month, at the Discover event in Barcelona, we announced the revolutionary HP Big Data Reference Architecture – a major step forward in how we, as a community of users, do Hadoop and big data – and it is a stellar example of how purpose-built solutions can transform the way you accelerate IT initiatives like big data.  We’re proud that HP is leading the way in driving this new model of innovation, with the support and partnership of the leading voices in Hadoop today.

Here’s the quick version on what the HP Big Data Reference Architecture is all about:

Think about all the Hadoop clusters you’ve implemented in your environment – they could be pilot or production clusters, hosted by developer or business teams, and hosting a variety of applications.  If you’re following standard Hadoop guidance, each instance is most likely a set of general purpose server nodes with local storage.

For example, your IT group may be running a 10 node Hadoop pilot on servers with local drives, your marketing team may have a 25 node Hadoop production cluster monitoring social media on similar servers with local drives, and perhaps similar for the web team tracking logs, the support team tracking customer cases, and sales projecting pipeline – each with their own set of compute + local storage instances.

There’s nothing wrong with that setup – it’s the standard configuration that most people use, and it works well.

However….

Just imagine if we made a few tweaks to that architecture.

  • What if we replaced the good-enough general purpose nodes, and replaced them with purpose-built nodes?
    • For compute, what if we used HP Moonshot, which is purpose-built for maximum compute density and  price performance?
    • For storage, what if we used HP ProLiant SL4540, which is purpose-built for dense storage capacity, able to get over 3PB of capacity in a single rack?
  • What if we took all the individual silos of storage, and aggregated them into a single volume using the purpose-built SL4540?  This way all the individual compute nodes would be pinging a single volume of storage.  (A short sketch of this idea follows the list.)
  • And what if we ensured we were using some of the newer high speed Ethernet networking to interconnect the nodes?
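To picture what “a single volume of storage” means from the compute side, here is a rough sketch of several teams’ jobs all reading from one shared Hadoop namespace instead of from per-cluster local disks. The host name and dataset paths are invented, and it assumes the Hadoop client libraries that pyarrow needs are available on the machine running it – this is an illustration of the idea, not part of the reference architecture itself.

```python
# A rough sketch: every compute cluster (marketing, web logs, support, ...)
# points at one shared storage namespace instead of keeping its own silo.
# Host name and dataset paths are hypothetical.
from pyarrow import fs

shared = fs.HadoopFileSystem(host="shared-storage-ns", port=8020)

for dataset in ("/data/marketing/social", "/data/web/logs", "/data/support/cases"):
    entries = shared.get_file_info(fs.FileSelector(dataset, recursive=False))
    print(dataset, "->", len(entries), "entries visible to every compute cluster")
```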

Well, we did.

And the results are astounding.

While there is a very apparent cost benefit and easier management, there is a surprising bump in performance in terms of read and write. 

It was a surprise to us in the labs, but we have validated it in a variety of test cases.  It works, and it’s a big deal.

And Hadoop industry leaders agree.

“Apache Hadoop is evolving and it is important that the user and developer communities are included in how the IT infrastructure landscape is changing.  As the leader in driving innovation of the Hadoop platform across the industry, Cloudera is working with and across the technology industry to enable organizations to derive business value from all of their data.  We continue to extend our partnership with HP to provide our customers with an array of platform options for their enterprise data hub deployments.  Customers today can choose to run Cloudera on several HP solutions, including the ultra-dense HP Moonshot, purpose-built HP ProLiant SL4540, and work-horse HP Proliant DL servers.  Together, Cloudera and HP are collaborating on enabling customers to run Cloudera on the HP Big Data architecture, which will provide even more choice to organizations and allow them the flexibility to deploy an enterprise data hub on both traditional and newer infrastructure solutions.” – Tim Stevens, VP Business and Corporate Development, Cloudera

“We are pleased to work closely with HP to enable our joint customers’ journey towards their data lake with the HP Big Data Architecture. Through joint engineering with HP and our work within the Apache Hadoop community, HP customers will be able to take advantage of the latest innovations from the Hadoop community and the additional infrastructure flexibility and optimization of the HP Big Data Architecture.” – Mitch Ferguson, VP Corporate Business Development, Hortonworks

And this is just a sample of what HP is doing to think about “what’s next” when it comes to your IT architecture, Hadoop, and broader big data.  There’s more that we’re working on to make your IT run better, and to lead the communities to improved experience with data.

If you’re just now considering a Hadoop implementation or if you’re deep into your journey with Hadoop, you really need to check into this, so here’s what you can do:

  • My pal Greg Battas posted on the new architecture and goes technically deep into it, so give his blog a read to learn more about the details.
  • Hortonworks has also weighed in with their own blog.

If you’d like to learn more, you can check out the new published reference architectures that follow this design, featuring HP Moonshot and ProLiant SL4540.

If you’re looking for even more information, reach out to your HP rep and mention the HP Big Data Reference Architecture.  They can connect you with the right folks to have a deeper conversation on what’s new and innovative with HP, Hadoop, and big data. And, the fun is just getting started – stay tuned for more!

Until next time,

JOSEPH

@jbgeorge

Purpose-Built Solutions Make a Big Difference In Extracting Data Insights: HP ProLiant SL4500

October 20, 2014

This is a duplicate of the blog I’ve authored on the HP blog site at http://h30507.www3.hp.com/t5/Hyperscale-Computing-Blog/Purpose-Built-Solutions-Make-a-Big-Difference-In-Extracting-Data/ba-p/173222#.VEUdYrEo70c

Indulge me as I flash back to the summer of 2012 at the Aquatics Center in London, England – it’s the Summer Olympics, where some of the world’s top swimmers, representing a host of nations, are about to kick off the Men’s 100m Freestyle swimming competition. The starter gun fires, and the athletes give it their all in a heated head to head match for the gold.

And the results of the race are astounding: USA’s Nathan Adrian took the gold medal with a time of 47.52 seconds, with Australia’s James Magnussen finishing a mere 0.01 seconds later to claim the silver medal! It was an incredible display of competition, and a real testament to the power of the human spirit.

For an event demanding such precise timing, we can only assume that very sensitive and highly calibrated measuring devices were used to capture accurate results. And it’s a good thing they did – fractions of a second separated first and second place.

Now, you and I have both measured time before – we’ve checked our watches to see how long it has been since the workday started, we’ve used our cell phones to see how long we’ve been on the phone, and so on. It got the job done. Surely the Olympic judges at the 2012 Men’s 100m Freestyle had some of these less precise options available – why didn’t they just simply huddle around one of their wrist watches to determine the winner of the gold, silver and bronze?

OK, I am clearly taking this analogy to a silly extent to make a point.

When you get serious about something, you have to step up your game and secure the tools you need to ensure the job gets done properly.

There is a real science behind using purpose-built tools to solve complex challenges, and the same is true with IT challenges, such as those addressed with big data / scale out storage. There are a variety of infrastructure options to deal with the staggering amounts of data, but there are very few purpose-built server solutions like HP’s ProLiant SL4500 product – a server solution built SPECIFICALLY for big data and scale out storage.

The HP ProLiant SL4500 was built to handle your data. Period.

  • It provides unprecedented drive capacity, with over THREE PB in a single rack (a rough capacity sketch follows this list)
  • It delivers scalable performance across multiple drive technologies like SSD, SAS or SATA
  • It provides significant energy savings with shared cooling and power and reduced complexity with fewer cables
  • It offers flexible configurations:
    • A 1-node, 60 large form factor drive configuration, perfect for large scale object storage with software vendors like Cleversafe and Scality, or with open source projects like OpenStack Swift and Ceph
    • A 2-node, 25 drive per node configuration, ideal for running Microsoft Exchange
    • A 3-node, 15 drive per node configuration, optimal for running Hadoop and analytics applications
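For a rough sense of what these configurations mean in raw capacity, here is a quick back-of-the-envelope sketch. The per-drive size is an assumption (large form factor drive capacities vary), so plug in your own numbers.

```python
# Back-of-the-envelope raw capacity for the configurations listed above.
# DRIVE_TB is an assumed drive size, not a figure from the original post.
DRIVE_TB = 6

configs = {
    "1-node x 60 drives (object storage)": 1 * 60,
    "2-node x 25 drives/node (Exchange)":  2 * 25,
    "3-node x 15 drives/node (Hadoop)":    3 * 15,
}

for name, drives in configs.items():
    print("%-38s %3d drives ~= %4d TB raw per chassis" % (name, drives, drives * DRIVE_TB))

# Several dense chassis in a 42U rack is how you approach multiple
# petabytes of raw capacity per rack (before replication or erasure coding).
```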

If you’re serious about big data and scale out storage, it’s time to consider stepping up your game with the SL4500. Purpose-built makes a difference, and the SL4500 was purpose-built to help you make sense of your data.

You can learn more about the SL4500 by talking to your HP rep or by visiting us online at HP ProLiant SL4500 Scalable Systems or at Object Storage Software for ProLiant.

And if you’re here at Hadoop World this week, come on by the HP booth – we’d love to chat about how we can help solve your data challenges with SL4500 based solutions.

Until next time,

Joseph George

@jbgeorge

NOW AVAILABLE: The Dell Red Hat Cloud Solution, powered by RHEL OpenStack Platform!

April 16, 2014


This is a duplicate of a blog I posted on del.ly/60119gex.

This week, those of us on the OpenStack and Red Hat OpenStack teams are partying like it’s 1999! (For those of you who don’t get that reference, read this first.)

Let me provide some context…

In 1999, when Linux was still in the early days of being adopted broadly by the enterprise (similar to an open source cloud project we all know), Dell and Red Hat joined forces to bring the power of Linux to the mainstream enterprise space.

Fast forward to today, and we see some interesting facts:

  • Red Hat has become the world’s first billion dollar open source company
  • 1 out of every 5 servers sold annually runs Linux
  • The enterprise is far more receptive to open source than in the past

So today – Dell and Red Hat are doing it again: this time with OpenStack.

Today, we announce the availability of the Dell Red Hat Cloud Solution, Powered by Red Hat Enterprise Linux OpenStack Platform – a hardware + software + services solution focused on enabling the broader mainstream market with the ability to run and operate OpenStack backed by Dell and Red Hat.  This is a hardened architecture, a validated distribution of OpenStack, additional software, and services / support to get you going and keep you going, and lets you:

  • Accelerate your time to value with jointly engineered open, flexible components and purpose engineered configurations to maximize choice and eliminate lock-in
  • Expand on your own terms with open, modular architectures and stable OpenStack technologies that can scale out to meet your evolving IT and business needs
  • Embrace agility with open compute, storage, and networking technologies to transform your application development, delivery, and management
  • Provide leadership to your organization with new agile, open IT services capable of massive scalability to meet dynamic business demands

Here is one more data point to consider – Dell’s IT organization is using the RHEL OpenStack Platform as a foundational element for incubating new technologies with a self-service cloud infrastructure. Now, that is a pretty strong statement about how an OpenStack cloud can help IT drive innovation in a global-scale organization.

At the end of the day, both Dell and Red Hat are committed to getting OpenStack to the enterprise with the right level of certification, validation, training, and support.

We’ve done it before with RHEL, and we’re going to do it again with OpenStack.

Until next time,

JOSEPH
@jbgeorge