Archive

Posts Tagged ‘cloud’

Perlmutter powers up: Meet the new next-gen supercomputer at Berkeley Lab

June 4, 2021 Leave a comment

This is a duplicate of a high-performance computing blog I authored for HPE, originally published at the HPE Newsroom on May 27, 2021.

Just unveiled by the National Energy Research Scientific Computing Center at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, Perlmutter is now ready to play a key role in advanced computing, AI, data science, and more. It’s the first in a series of HPE Cray EX supercomputers to be delivered to the DOE.

I’m happy to share that the National Energy Research Scientific Computing Center (NERSC), a high-performance-computing user facility at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, announced that the phase 1 installation of its new supercomputer, Perlmutter, is now complete.


An important milestone for NERSC, the Department of Energy (DOE), HPE, and the HPC community

Perlmutter is the largest HPE Cray EX supercomputer shipped to date. It’s the first in a series of HPE Cray EX supercomputers to be delivered to the DOE for important research in areas such as climate modeling, early universe simulations, cancer research, high-energy physics, protein structure, materials science, and more. Over the next few years, HPE will deliver three exascale supercomputers to the DOE, culminating in the two-exaflop El Capitan system for Lawrence Livermore National Laboratory.

The HPE Cray EX supercomputer is HPE’s first exascale-class HPC system, and Perlmutter uses many of its exascale-era innovations. Designed from the ground up for converged HPC and AI workloads, the HPE Cray EX supports a heterogeneous computing model, allowing compute blades to be mixed across processor and GPU architectures. This capability is important for NERSC’s usage models.

Perlmutter will be a heterogeneous system with both GPU-accelerated and CPU-only nodes to support NERSC applications and users, including a mix of those who utilize GPUs and those who don’t.

Phase 1 is the GPU-accelerated portion of Perlmutter and comprises 12 HPE Cray EX cabinets with 1,500 AMD EPYC nodes, each with 4 NVIDIA A100 GPUs, for a total of 6,000 GPUs. Phase 2 will add 3,000 dual-socket AMD EPYC CPU-only nodes later this year, housed in an additional 12 cabinets.

Perlmutter takes advantage of the HPE Slingshot interconnect, which incorporates intelligent features that enable diverse workloads to run simultaneously across the system. It includes novel adaptive routing, quality-of-service, and congestion management features while retaining full Ethernet compatibility. Ethernet compatibility is important to NERSC’s use of the supercomputer, as it allows new paths for connecting the system more broadly to their internal file system as well as to the external internet.

Perlmutter also enables a direct connection to the Cray ClusterStor E1000, an all-flash Lustre-based storage system. ClusterStor E1000 is the fastest storage system of its kind, transferring data at more than 5 terabytes/sec. The Perlmutter deployment is currently the world’s largest all-flash system—with 35 petabytes of usable storage. 

HPE Cray EX is more than just hardware innovations

HPE Cray software is designed to meet the needs of this new era of computing, where systems are growing larger and more complex with the need to accommodate converged HPC, AI, analytics, modeling, and simulation workloads in a cloudlike environment. The HPE Cray software deployed with Perlmutter will be key to NERSC’s ability to deliver a new type of supercomputing experience to their users, who are becoming more cloud-aware and cloud-focused.

The exascale era is just beginning

We here at HPE are excited about and committed to this journey with the DOE, as we help usher in a new class of supercomputers that will drive scientific research at a scale that is barely fathomable today.

Perlmutter is an important first step on this path, and we look forward to the many diverse scientific discoveries that NERSC will deliver with its new HPE Cray EX supercomputer.

Read the complete NERSC news story: Berkeley Lab Deploys Next-Gen Supercomputer, Perlmutter, Bolstering U.S. Scientific Research

Running for the OpenStack Board: “All In” on OpenStack

January 5, 2017 Leave a comment

This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

In thinking about the OpenStack community, our approach to the project going forward, and the upcoming Board elections, I’m reminded of a specific hand of the poker game Texas Hold ‘Em I observed a few years back between two players.

As one particular hand began, both players had similar chip stacks, and were each dealt cards that were statistically favorable to win.

The hand played out like most others – the flop, the turn, the river, betting, calling, and so on.  And as the hand continued toward its conclusion, those of us observing could see that one player held the statistically better cards, and presumably the win.

But then the second player made a bold move that turned everything on its head.

He went “all in.”

The “all in” move in poker is one that commits all of your chips to the pot, and often requires your opponent to make a decision for most or all of their chips. It is an aggressive move in this scenario.

After taking some time to consider his options, the first player ultimately chose to fold his strong cards and cut his perceived losses, allowing the other player to claim the winnings.

And that prize was claimed almost entirely because of the “all in” strategy.

Clearly, going “all in” can be a very strong move indeed.

 

Decision Time in OpenStack

Next week – Monday, January 9 through Friday, January 13 – is an important week for the OpenStack community, as we elect the 2017 Individual Representatives to the Foundation’s Board of Directors.

I’m honored to have been nominated as a candidate for Board Director, to potentially serve the community again, as I did back in 2013.

Back in the summer of 2010, I was fortunate to be one of the few in the crowded ball room at the Omni Hotel in Austin, Texas, witnessing the birth of the OpenStack project. And it is amazing to see how far it has come – but with a tremendous amount of work yet to do.

Over the years, we’ve been fortunate to celebrate tremendous wins and market excitement. Other times, there were roadblocks to overcome.  And similar to the aforementioned poker game, we often had to analyze “the hand” we were dealt, “estimate the odds” of where cloud customers and the market were headed, and position ourselves to maximize our chances for success – often trusting our instincts when available data was incomplete at best.

And, as with many new projects in a growth phase, our community was often put in a position to reconfirm our commitment to our mission. And our response was resounding and consistent on where we stood…

“All in.”

 

Remaining “All In” with OpenStack

As I mentioned a few weeks ago, it’s critical that we stay committed to the cause of OpenStack and its objective.

There are four key areas of focus for OpenStack, that I hope to advocate, if elected to the board.

  1. OpenStack adoption within the enterprise worldwide.  I am in the camp that very much believes in the private cloud (as well as public cloud), and that the open source and vendor communities need to put more effort and resources into ensuring OpenStack is the optimal private cloud out there, across all industries / geographies / etc.
  2. Designing and positioning OpenStack to address tangible business challenges.  The enterprise customer is not seeking a new technology – they are looking for things like ways to make IT management more self-service, a means to drive on-demand scalability of infrastructure and PaaS, and a way to operate workloads on-premise AS WELL AS off-premise.
  3. Addressing the cultural IT changes that need to occur.  As cloud continues to permeate the enterprise IT organization, we need to deliver the right training and certifications to enable existing IT experts to transition to this new means of IT service.  If we can ensure these valuable people have a place in the new archetype, they will be our advocates as well.
  4. Championing the OpenStack operator.  The reality of cloud is not just in the using, but in the operating.  There is a strong contingent of operators within our community, and their role is critical to our success – we need to continue to enable this important function.

I’ve been fortunate to be a part of a number of technology movements in my career, just as they started to make the turn from innovative idea to consistent, reliable IT necessity. And this is why I continue to be excited about the prospect of OpenStack – I’m seeing growth with more customers, more use cases, more production implementations.

And, while there may be detractors out there, coining catchy and nonsensical “as-a-Service” buzzwords, my position on OpenStack should sound familiar – because it hasn’t changed since Day One.

“All in.”

And, if given the opportunity, I hope to partner with you to get the rest of the world “all in” on OpenStack as well.

Until next time,

JOSEPH
@jbgeorge

Joseph George is the Vice President of Solutions Strategy at SUSE, and is a candidate for OpenStack Board of Directors.  OpenStack Elections take place on the week of January 9, 2017.
Click here to learn more.

 

OpenStack, Now and Moving Ahead: Lessons from My Own Personal Transformation

December 15, 2016 Leave a comment


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

It’s a transformative time for OpenStack.

And I know a thing or two about transformations.

Over the last two and a half years, I’ve managed to lose over 90 pounds.

(Yes, you read that right.)

It was a long and arduous effort, and it is a major personal accomplishment that I take a lot of pride in.

Lessons I’ve Learned

When you go through a major transformation like that, you learn a few things about the process, the journey, and about yourself.

With OpenStack on my mind these days – especially after being nominated for the OpenStack Foundation Board election – I can see correlations between my story and where we need to go with OpenStack.

While there are a number of lessons learned, I’d like to delve into three that are particularly pertinent for our open source cloud project.

1. Clarity on the Goal and the Motivation

It’s a very familiar story for many people.  Over the years, I had gained a little bit of weight here and there as life events occurred – graduated college, first job, moved cities, etc. And I had always told myself (and others), “By the time I turned 40 years old, I will be back to my high school weight.”

The year I was to turn 40, I realized that I was running out of time to make good on my word!

And there it was – my goal and my motivation.

So let’s turn to OpenStack – what is our goal and motivation as a project?

According to wiki.OpenStack.org, the OpenStack mission is “to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable. OpenStack is open source, openly designed, openly developed by an open community.”

That’s our goal and motivation:

  • meet the needs of public and private clouds
  • no matter the size
  • simple to deploy
  • very scalable
  • open across all parameters

While we exist in a time where it’s very easy to be distracted by every new, shiny item that comes along, we must remember our mission, our goal, our motivation – and stay true to what we set out to accomplish.

2. Staying Focused During the “Middle” of the Journey

When I was on the path to lose 90 pounds, it was very tempting to be satisfied during the middle part of the journey.

After losing 50 pounds, needless to say, I looked and felt dramatically better than before.  Oftentimes, I was congratulated – as if I had reached my destination.

But I had not reached my destination.

While I had made great progress – and there were very tangible results to demonstrate that – I had not yet fully achieved my goal.  And looking back, I am happy that I was not content to stop halfway through. While I had a lot to be proud of at that point, there was much more to be done.

OpenStack has come a long way in its fourteen releases:

  • The phenomenal Newton release focused on scalability, interoperability, and resiliency – things that many potential customers and users have been waiting for.
  • The project has now been validated as 100% compliant with the Core Infrastructure Initiative (CII) best practices, a project of the Linux Foundation, a major milestone toward the security of OpenStack.
  • Our community now offers the “Certified OpenStack Administrator” certification, a staple of datacenter software that much of the enterprise expects, further validating OpenStack for them.

We’ve come a long way.   But there is more to go to achieve our ultimate goal.  Remember our mission: open source cloud, public and private, across all size clouds, massively scalable, and simple to implement.

We are enabling an amazing number of users now. While we celebrate that success, and as more and more customers go into production with OpenStack, we need to keep our eyes on the prize we committed to.

3. Constantly Learning and Adapting

While losing 90 pounds was a major personal accomplishment, it could all have been in vain if I did not learn how to maintain the weight loss once it was achieved.

This meant learning what worked and what didn’t work, as well as adapting to achieve a permanent solution.

Case in point: a part of most weight loss plans is to get plenty of water daily, something I still do to this day. While providing numerous health advantages, it is also a big help with weight loss. However, I found that throughout the day, I would get consumed with daily activities and reach the end of the day without having reached my water requirement goal.

Through some experimentation with tactics – which included setting up reminders on my phone and keeping water with me at all times, among other ideas – I arrived at my personal solution: GET IT DONE EARLY.

I made it a point to get through my water goal at the beginning of the day, before my daily activities began. This way, if I did not remember to drink regularly throughout the day, it was of no consequence since I had already met my daily goal.

We live in a world where open source is being adopted by ever more people, many of them open source newcomers. From Linux to Hadoop to Ceph to Kubernetes, we are seeing more and more projects find success with a new breed of users.  OpenStack’s role is not to shun these projects as isolationists, but rather to understand how OpenStack adapts so that we maximize attainment of our mission.

This also means that we understand how our project gets “translated” for the bevy of customers who have legitimate challenges that OpenStack can help address. It means that we help potential users wade through the cultural IT changes that will be required.

Learning where our market is taking us, as well as adapting to the changing technology landscape, remains crucial for the success of the project.

Room for Optimism

I am personally very optimistic about where OpenStack goes from here. We have come a long way, and have much to be proud of.  But much remains to be done to achieve our goal, so we must be steadfast in our resolve and focus.

And it is a mission that we can certainly accomplish.  I believe in our vision, our project, and our community.

And take it from me – reaching BIG milestones is very, very rewarding.

Until next time,

JOSEPH
@jbgeorge

Highlights from OpenStack Summit Barcelona

October 31, 2016 Leave a comment


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

What a great week at the OpenStack Summit this past week in Barcelona! Fantastic keynotes, great sessions, and excellent hallway conversations.  It was great to meet a number of new Stackers as well as rekindle old friendships from back when OpenStack kicked off in 2010.

A few items of note from my perspective:

OpenStack Foundation Board of Directors Meeting

OpenStack Board In Session

As I mentioned in my last blog, it is the right of every OpenStack member to attend and listen in on each board meeting that the OpenStack Foundation Board of Directors holds.  I made sure to head out on Monday and attend most of the day.  There was a packed agenda, so here are a few highlights:

  • Interesting discussion around the User Committee project that board member Edgar Magana is working toward, with discussion on its composition, whether members should be elected, and if bylaw changes are warranted.  It was a deep topic that required further time, so the topic was deferred to a later discussion with work to be done to map out the details. This is an important endeavor for the community in my opinion – I will be keeping an eye on how this progresses.
  • A number of strong presentations by prospective gold members were delivered as they made their cases to be added to that tier. I was especially happy to see a number of Chinese companies presenting and making their case.  China is a fantastic growth opportunity for the OpenStack project, and it was encouraging to see players in that market discuss all they are doing for OpenStack in the region.  Ultimately, we saw City Network, Deutsche Telekom, 99Cloud and China Mobile all get voted in as Gold members.
  • Lauren Sell (VP of Marketing for the Foundation) spoke on a visionary model her team is investigating for how our community can engage with other projects in terms of user events and conferences.  Kubernetes, Ceph, and other projects were named as examples.  This is a great indicator of how we’ve evolved, as it highlights that multiple projects are often needed to address actual business challenges.  A strong indicator of maturity for the community.

Two Major SUSE OpenStack Announcements

SUSE advancements in enterprise ready OpenStack made its way to the Summit in a big way this week.

  1. SUSE OpenStack Cloud 7:  While we are very proud to be one of the first vendors to provide an enterprise-grade Newton-based OpenStack distribution, this release also offers features like new Container-as-a-Service capabilities and non-disruptive upgrade capabilities.

    Wait, non-disruptive upgrade?  As in, no downtime?  And no service interruptions?

    That’s right – disruption to service is a big no-no in the enterprise IT world, and now SUSE OpenStack Cloud 7 gives you the ability to stay live during an OpenStack upgrade.

  2. Even more reason to become a COA.  All the buzz around the Foundation’s “Certified OpenStack Administrator” exam got even better this week when SUSE announced that the exam would now feature the SUSE platform as an option.

    And BIG bonus win – if you pass the COA using the SUSE platform, you will be granted

    1. the Foundation’s COA certification
    2. SUSE Certified Administrator in OpenStack Cloud certification

That’s two certifications with one exam.  (Be sure to specify the SUSE platform when taking the exam to take advantage of this option.)

There’s much more to these critical announcements so take a deeper look into them with these blogs by Pete Chadwick and Mark Smith.  Worth a read.

Further Enabling the Enterprise

As you know, enterprise adoption of OpenStack is a major passion of mine – I’ve captured a couple more signs I saw this week of OpenStack continuing to head in the right direction.

  • Progress in Security. On stage this week, OpenStack was awarded the Core Infrastructure Initiative (CII) Best Practices badge.  The CII is a Linux Foundation project that validates open source projects, specifically for security, quality, and stability.  By earning this badge, OpenStack is now validated as 100% compliant by a trusted third party.  Security FTW!
  • Workload-based Sample Configs.  This stable of assets has been building for some time, but OpenStack.org now boasts a number of reference architectures addressing some of the most critical workloads.  From web apps to HPC to video processing and more, there are great resources on how to optimize OpenStack for these workloads.  (Being a big data fan, I was particularly happy with the big data resources here.)

I’d be interested in hearing what you saw as highlights as well – feel free to leave your thoughts in the comments section.

OK, time to get home, get rested – and do this all over again in 6 months in Boston.

(If you missed the event this time, OpenStack.org has you covered – start here to check out a number of videos from the event, and go from there.)

Until next time,

JOSEPH
@jbgeorge

 

Boston in 2017!

This Week: OpenStack Summit Barcelona!

October 26, 2016 Leave a comment

This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

In a few days, Stackers will congregate in beautiful Barcelona, Spain to kick off the bi-annual OpenStack Summit and User Conference, the 14th of its kind.

On the heels of the recent OpenStack Newton launch, we will see a wide variety of people, backgrounds, industries, and skill sets represented, all focused on learning about, sharing best practices on, and working on the future of OpenStack.

There are many great sessions, workshops, and evening events happening at the summit this coming week, but three in particular that I want to highlight.

OpenStack Board of Directors Meeting

Did you know that, since OpenStack is an open community, the OpenStack Foundation board meeting is open for members to attend and listen in on the discussion? It’s great for members to have this level of access, so take advantage of the openness built into the OpenStack community, and take a listen.

While some portions of the meeting will be closed sessions (rightly so), during most of the meeting you’ll hear about progress on specific initiatives, comments on new members to the community, and updates on future directions.

It’s a great experience that more of our members should take part in, so I highly recommend it. You can check out the planned agenda and WebEx details here.

 

OpenStack Ops Meetup

I mentioned the OpenStack operator community in my last blog (“Renewing Focus on Bringing OpenStack to the Masses”), and how I feel strongly about championing the cause of the operators out there.

While many of us are focused on code design, quality, new projects, etc., the operators are tasked with implementing OpenStack.  This involves the day-to-day effort of running OpenStack clouds, which includes readying IT environments for OpenStack deployments, first-hand implementation of the project, ongoing maintenance and upgrades of the cluster, and being driven by a specific business goal they will be measured by.

At this Summit, the Operators will be hosting an Ops Meetup to get into the meat of OpenStack Ops. Now this stands to be an intense, down-in-the-weeds discussion – not for the faint of heart! 🙂  So if you are among the many tasked with getting OpenStack operational in your environment, head on over and get to know your peers in this space, swap stories of what works well, share best practices, and make connections you can stay in touch with for years to come.

Learn more about the Ops Meetup here.

Certified OpenStack Administrator (COA)

Are you aware that you can now be CERTIFIED as an OpenStack Admin?

The COA is an exam you can take to prove your ability to solve problems using both the command line and graphical interfaces of OpenStack, demonstrating that you have mastered a number of skills required to operate the solution.

At OpenStack Summit, there are a few COA activities occurring that you should be aware of:

  • COA 101. Anne Bertucio and Heidi Bretz of the OpenStack Foundation will be hosting a 30-minute beginner-level session on the topic of COA, touching on the why / what / how relating to the COA exam.  (More info here.)
  • COA booth. The Foundation Lounge at the Summit will feature an area dedicated to learning more about the COA.  A variety of OpenStack community volunteers will be pitching in to answer questions, help you find training, and even help you sign up for the COA.  I plan on helping out on Wednesday right after the morning keynotes, so stop by and let’s chat COA.
  • COA exams.  If you’re ready now to take the exam, head on over to https://www.openstack.org/coa/ and get the details on when you can take the exam.  The world needs Certified OpenStack Admins!

(PS – If you need help with some prep, SUSE’s happy to help there – click here to get details on COA training from SUSE.)

I’m looking forward to a great week of re-connecting with a number of you I haven’t seen in some time. If you’re out at Barcelona, look me up – I’d love to hear what OpenStack project you’re working on, how you are implementing OpenStack, or where OpenStack can further help you meet your business objectives.

See you in Barcelona!

JOSEPH
@jbgeorge

Address Your Company’s Data Explosion with Storage That Scales

June 27, 2016 Leave a comment


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Experts predict that our world will generate 44 ZETTABYTES of digital data by 2020.

How about some context?


Now, you may think that these are all teenage selfies and funny cat videos – in actuality, much of it is legitimate data your company will need to stay competitive and to serve your customers.

 

The Data Explosion Happening in YOUR Industry

Some interesting factoids:

  • An automated manufacturing facility can generate many terabytes of data in a single hour.
  • In the airline industry, a commercial airplane can generate upwards of 40 TB of data per hour.
  • Mining and drilling companies can gather multiple terabytes of data per minute in their day-to-day operations.
  • In the retail world, a single store can collect many TB of customer data, financial data, and inventory data.
  • Hospitals quickly generate terabytes of data on patient health, medical equipment data, and patient x-rays.

The list goes on and on. Service providers, telecommunications, digital media, law enforcement, energy companies, HPC research groups, governments, the financial world, and many other industries (including yours) are experiencing this data deluge now.

And with terabytes of data being generated by single products by the hour or by the minute, the next stop is coming up quick:  PETABYTES OF DATA.


 

Status Quo Doesn’t Cut It

I know what you’re thinking:  “What’s the problem? I’ve had a storage solution in place for years.  It should be fine.”

Not quite.

  1. You are going to need to deal with a LOT more data than you are storing today in order to maintain your competitive edge.
  2. The storage solutions you’ve been using for years have likely not been designed to handle this unfathomable amount of data.
  3. The costs of merely “adding more” of your current storage solutions to deal with this amount of data can be extremely expensive.

The good news is that there is a way to store data at this scale with better performance at a much better price point.

 

Open Source Scale Out Storage

Why is this route better?

  • It was designed from the ground up for scale.
    Much like how mobile devices changed the way we communicate / interact / take pictures / trade stock, scale-out storage is a different design for storage. Instead of all-in-one storage boxes, it uses a “distributed model” – farming out the storage to as many servers / hard drives as it has access to, making it very scalable and very performant.  (Cloud environments leverage a very similar model for computing.)
  • Its cost is primarily commodity servers with hard drives and software.
    Traditional storage solutions are expensive to scale in capacity or performance.  Instead of expensive engineered black boxes, we are looking at commodity servers and a bit of software that sits on each server – you then just add a “software + server” combo as you need to scale.
  • When you go open source, the software benefits get even better.
    Much like other open source technologies, like Linux operating systems, open source scale out storage allows users to take advantage of rapid innovation from the developer communities, as well as cost benefits which are primarily support or services, as opposed to software license fees.
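To make the “distributed model” concrete, here is a minimal sketch in Python of how an object can be deterministically mapped to a few of the available servers. The function name `place` and the rendezvous-style hashing scheme are illustrative assumptions of mine, not the actual placement algorithm used by Ceph (which uses CRUSH) or any particular product:

```python
import hashlib

def place(obj_name, nodes, replicas=3):
    """Deterministically map an object to `replicas` distinct nodes by
    ranking every node on a hash of (node, object). Any client that
    knows the node list can compute the same placement with no central
    lookup table -- the core idea behind scale-out data distribution."""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{n}:{obj_name}".encode()).hexdigest(),
    )
    return ranked[:replicas]

# Hypothetical six-server cluster: three copies of each object land on
# a stable, pseudo-random subset of the servers.
nodes = [f"server-{i}" for i in range(1, 7)]
print(place("videos/episode-01.mp4", nodes))
```

Because the mapping is a pure function of the object name and the node list, adding a "software + server" combo only reshuffles the objects whose ranking changes, which is what makes this model scale.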

 

Ready.  Set.  Go.

At SUSE, we’ve put this together in an offering called SUSE Enterprise Storage, an intelligent software-defined storage management solution, powered by the open source Ceph project.

It delivers what we’ve talked about: open source scale out storage.  It scales, it performs, and it’s open source – a great solution to manage all that data that’s coming your way, that will scale as your data needs grow.

And with SUSE behind you, you’ll get full services and support to any level you need.

 

OK, enough talk – it’s time for you to get started.

And here’s a great way to kick this off: Go get your FREE TRIAL of SUSE Enterprise Storage.  Just click this link, and you’ll be directed to the site (note you’ll be prompted to do a quick registration.)  It will give you quick access to the scale out storage tech we’ve talked about, and you can begin your transition over to the new evolution of storage technology.

Until next time,

JOSEPH
@jbgeorge

Tech in Real Life: Content Delivery Networks, Big Data Servers and Object Storage

April 6, 2015 Leave a comment

.

This is a duplicate of a blog I authored for HP, originally published at hp.nu/Lg3KF.

In a joint blog authored with theCube’s John Furrier and Scality’s Leo Leung, we pointed out some of the unique characteristics of data that make it act and look like a vector.

At that time, I promised we’d delve into specific customer uses for data and emerging data technologies – so let’s begin with our friends in the telecommunications and media industries, specifically around the topic of content distribution.

But let’s start at a familiar point for many of us…

If you’re like most people, when it comes to TV, movies, and video content, you’re an avid (sometimes binge-watching) fan of video streaming and video on-demand.  More and more people are opting to view content via streaming technologies.  In fact, a growing number of broadcast shows are viewed on mobile and streaming devices, as are a number of live events, such as this year’s NCAA basketball tournament.

These are fascinating data points to ponder, but think about what goes on behind them.

How does all this video content get stored, managed, and streamed?

Suffice it to say, telecom and media companies around the world are addressing this exact challenge with content delivery networks (CDN).  There are a variety of interesting technologies out there to help develop CDNs, and one interesting new technology to enable this is object storage, especially when it comes to petabytes of data.

Here’s how object storage helps when it comes to streaming content.

  • With streaming content comes a LOT of data.  Managing and moving that data is a key area to address, and object storage handles it well.  It allows telecom and media companies to effectively manage many petabytes of content with ease – many IT options lack that ability to scale.  Features in object storage like replication and erasure coding allow users to break large volumes of data into bite-size chunks and disperse them over several different server nodes, often across several different geographic locations.  As data is needed, it is rapidly re-compiled and distributed.
  • Raise your hand if you absolutely love waiting for your video content to load.  (Silence.)  The fact is, no one likes to watch the status bar creep along while waiting for zombies, your futbol club, or the next big singing sensation to show up on screen.  Because object storage technologies can support very high bandwidth and millions of HTTP requests per minute, anyone looking to distribute media can give their customers access to content with superior performance.  It has a lot to do with the network, but also with the software managing the data behind the network – and object storage fits the bill.

These are just two of the considerations, and there are many others, but object storage becomes an interesting technology to consider if you’re looking to get content or media online, especially if you are in the telecom or media space.

Want a real-life example? Check out how our customer RTL II, a Europe-based television station, addressed its video streaming challenge with object storage.  It’s all detailed in this case study – “RTL II shifts video archive into hyperscale with HP and Scality.”  Using HP ProLiant SL4540 big data servers and object storage software from HP partner Scality, RTL II was able to boost its video transfer speeds by 10x.

Webinar this week! If this is a space you could use more education on, Scality and HP will be hosting a couple of webinars this week, specifically around object storage and content delivery networks.  If you’re looking for more on this, be sure to join us – here are the details:

Session 1 (Time-friendly for European and APJ audiences)

  • Who:  HP’s big data strategist, Sanjeet Singh, and Scality VP, Leo Leung
  • Date:  Wed, Apr 8, 2015
  • Time:  3pm Central Europe Summer / 8am Central US
  • Registration Link

Session 2 (Time-friendly for North American audiences)

  • Who:  HP Director, Joseph George, and Scality VP, Leo Leung
  • Date:  Wed, Apr 8, 2015
  • Time: 10am Pacific US / 12 noon Central US
  • Registration Link

And as always, for any questions at all, you can always send us an email at BigDataEcosystem@hp.com or visit us at www.hp.com/go/ProLiant/BigDataServer.

And now off to relax and watch some TV – via streaming video of course!

Until next time,

JOSEPH
@jbgeorge

NOW AVAILABLE: The Dell Red Hat Cloud Solution, powered by RHEL OpenStack Platform!

April 16, 2014 Leave a comment


This is a duplicate of a blog I posted on del.ly/60119gex.

This week, those of us on the OpenStack and Red Hat OpenStack teams are partying like it’s 1999! (For those of you who don’t get that reference, read this first.)

Let me provide some context…

In 1999, when Linux was still in the early days of being adopted broadly by the enterprise (similar to an open source cloud project we all know), Dell and Red Hat joined forces to bring the power of Linux to the mainstream enterprise space.

Fast forward to today, and we see some interesting facts:

  • Red Hat has become the world’s first billion-dollar open source company
  • 1 out of every 5 servers sold annually runs Linux
  • Enterprises are far more receptive to open source than they were in the past

So today – Dell and Red Hat are doing it again: this time with OpenStack.

Today, we announce the availability of the Dell Red Hat Cloud Solution, Powered by Red Hat Enterprise Linux OpenStack Platform – a hardware + software + services solution focused on enabling the broader mainstream market to run and operate OpenStack, backed by Dell and Red Hat.  The solution combines a hardened architecture, a validated distribution of OpenStack, additional software, and services and support to get you going and keep you going, and it lets you:

  • Accelerate your time to value with jointly engineered open, flexible components and purpose engineered configurations to maximize choice and eliminate lock-in
  • Expand on your own terms with open, modular architectures and stable OpenStack technologies that can scale out to meet your evolving IT and business needs
  • Embrace agility with open compute, storage, and networking technologies to transform your application development, delivery, and management
  • Provide leadership to your organization with new agile, open IT services capable of massive scalability to meet dynamic business demands

Here is one more data point to consider – Dell’s IT organization is using the RHEL OpenStack Platform as a foundational element for incubating new technologies with a self-service cloud infrastructure. Now, that is a pretty strong statement about how an OpenStack cloud can help IT drive innovation in a global-scale organization.

At the end of the day, both Dell and Red Hat are committed to getting OpenStack to the enterprise with the right level of certification, validation, training, and support.

We’ve done it before with RHEL, and we’re going to do it again with OpenStack.

Until next time,

JOSEPH
@jbgeorge


It’s OpenStack Foundation Election Time!

December 5, 2013 Leave a comment
If you’re a member of the OpenStack community, you’re aware that the community is accepting nominations for the Foundation Board of Directors.
I was fortunate enough to be nominated by a couple of folks so I thought I’d post some of my details on my blog.
If you’re interested in getting me on the ballot (I need 10 nominations to do so), you can do so here: http://www.openstack.org/community/members/profile/1313
======

What is your relationship to OpenStack, and why is its success important to you? What would you say is your biggest contribution to OpenStack’s success to date?

In addition to serving on the OpenStack Foundation Board of Directors as a Gold Member Director from Dell, I was also the product manager who brought Dell’s first hardware + software + services OpenStack solution to market in July of 2011.  Now leading a team of professionals focused on OpenStack, we have brought multiple OpenStack solutions to market, including one that enables Hyper-V.

I believe OpenStack represents a shift by service providers and enterprise IT toward deeper community collaboration on new technologies and practices, and I will continue to drive the initiative to make my customers and the community successful in a very real-world, meaningful way.


Describe your experience with other non profits or serving as a board member. How does your experience prepare you for the role of a board member?

I have been active in a number of community and local church capacities that have prepared me to serve as a board member.  That, in addition to my past year as an OpenStack board member, has given me a pragmatic view of how to grow a community from both a technical and a business perspective.

What do you see as the Board’s role in OpenStack’s success?

The Board should be the guardian of the business functions of the OpenStack community, as well as a strategist for where OpenStack CAN go and SHOULD go – to attract more developers, more implementers, and more users.

What do you think the top priority of the Board should be in 2014?

I see three main priorities, though not to the exclusion of others:

1.  Clarify the definition of OpenStack – what is core, what is compliant, and what is not.

2.  Understand where the strategic opportunities lie for OpenStack as a technology, and clear the path to ensure OpenStack gets there.

3.  Fully enable any and every new entrant to OpenStack in a real way – developers, implementers, and users – with the right level of documentation, tools, community support, and vendor support.


Thanks, and appreciate your nomination to represent the OpenStack Foundation in 2014!

Until next time,

JOSEPH
@jbgeorge

THIS JUST IN: Dell, SUSE, Microsoft, and Cloudbase Collaborate to Enable Hyper-V for OpenStack-based SUSE Cloud 2.0

September 24, 2013 Leave a comment


(Note this is a cross-post of a blog I posted on the official Dell blog site, the company I work for.  The original blog is at http://is.gd/Ie6s10.)

(Also note the press release this blog relates to is at http://is.gd/W7e1BZ.)

Dell and SUSE: refining the art and science of OpenStack cloud deployments

The Dell OpenStack cloud solutions team is excited to unveil our newly enhanced Dell SUSE Cloud Solution, powered by OpenStack.  This enhanced cloud infrastructure solution makes multi-hypervisor clouds a reality: it is now possible to operate data center cloud environments with nodes running the KVM, Xen, and Hyper-V hypervisors.

So, what is so new here?

These enhancements to the Dell OpenStack Cloud Solution are delivered via Dell solution integration with SUSE Cloud 2.0 – an enterprise-ready OpenStack distribution for building private clouds, now with the ability to support Hyper-V nodes installed by Crowbar.

We continue to listen to our customers, and we understand their desire for choice. Through this solution we are providing customers a Microsoft hypervisor choice in addition to KVM and Xen. With support for multiple hypervisors, SUSE Cloud 2.0 provides extended flexibility so you can optimize cloud workloads on whatever hypervisor delivers the ideal operational and performance benefits in your environment. This flexibility is key to efficiency in today’s hyper-heterogeneous data centers. After all, isn’t this what cloud computing is all about?

What exactly is the Dell SUSE Cloud Solution?

Simply put, this solution is an end-to-end private cloud solution, with the following core components:

  • SUSE Cloud 2.0: SUSE’s enterprise-ready OpenStack distribution, with an integrated installation framework based on the Dell-initiated Crowbar open source project, enabling organizations to rapidly deploy OpenStack private clouds. This OpenStack distribution delivers all the integrated OpenStack projects including Nova, Swift, Cinder, Neutron, Keystone, Glance, and Dashboard.
  • SUSE Studio: The award-winning SUSE image building solution enables enterprises to rapidly adapt and deploy applications into the SUSE Cloud image repository or public clouds.
  • SUSE Manager: Manages Linux workloads and enables the efficient management, maintenance and monitoring of Linux workloads across physical, virtual, and public or private cloud environments.
  • Dell platforms and reference architecture: The Dell SUSE Cloud Solution includes a validated and certified reference architecture with Dell PowerEdge C6220 and R720/R720XD server systems and Dell networking infrastructure.
  • Dell and SUSE Professional services, support and training: Enterprise services for complete assessment, design, deployment and ongoing support, provided through a cooperative support model leveraging the combined capabilities of Dell and SUSE.

In addition to enhancing the core cloud software platform, the Dell and SUSE teams have delivered enhancements to the Dell Crowbar Operations Platform to rapidly instantiate hardware and deploy these multi-hypervisor environments.

Dell Crowbar is the robust open source installation framework that simplifies, standardizes, and automates the setup of OpenStack multi-hypervisor clouds. Dell and SUSE are actively developing and refining Crowbar, so you can fully configure your environment, automatically discover and configure new hardware platforms, and simply deploy the complete cloud software stack – all in a repeatable manner, in a fraction of the time required by manual efforts.

SUSE Studio and SUSE Manager are also available from SUSE, so you can quickly assemble applications into your image repository and easily monitor and maintain your deployed applications across your cloud resources.

Why consider open source solutions for your cloud?

Open source clouds allow you to innovate on open platforms and frameworks to accelerate your time-to-market and time-to-value. By leveraging community building and collaboration with the OpenStack project, you gain direct control over your cloud infrastructure and the software you use to manage it – this is why OpenStack is the fastest-growing open source project on the planet. Further, with OpenStack you have the opportunity to develop and refine your own features rather than wait for commercial vendors, who may or may not release the features you want when you want them.

Bottom line: OpenStack is faster, more flexible, and more cost-efficient, and it can be tuned for your environment.

How does OpenStack fit into Dell’s Cloud strategy?

Dell’s Cloud strategy is focused on three areas: enabling private cloud, deploying multi-cloud management and supplying cloud builders. OpenStack is key to our first pillar of enabling private clouds and allows us to provide customers with flexible, open cloud solutions so they can eliminate vendor lock-in and build solutions that best suit their needs.

Four Weeks, Four Webinars: A Deep Dive into the Dell SUSE Cloud Solution, Powered by OpenStack

Starting on September 26th, Dell and SUSE are hosting a webinar series for system administrators, DevOps engineers and solution architects. Please join us to get a step-by-step walkthrough of the joint Dell and SUSE OpenStack-based private cloud solution.

Shout out for OpenStack Summit 

To learn more about Dell and SUSE solutions for OpenStack clouds in person, come visit Dell at the OpenStack Summit in Hong Kong.  It’s coming fast – see you there!

Until next time,

JOSEPH
@jbgeorge