
A more sustainable world with high-performance computing

June 14, 2021

This is a duplicate of a high-performance computing blog I authored for HPE, originally published at the HPE Newsroom on June 14, 2021.

The new era of computing arrives with an increased emphasis on the importance of sustainability. Discover how HPE is delivering on its sustainability commitments with innovative, environmentally friendly HPC and AI solutions plus flexible as-a-service and financing options.


The data explosion is not slowing down. In fact, by 2025, IDC predicts that worldwide data will grow 61%, to 175 zettabytes (175 × 10²¹ bytes).1 This explosion of built-in intelligence, hyper-connectivity, and data from the edge is reshaping markets, disrupting every industry, and transforming how we live and work.

Answers to some of society’s most pressing challenges across medicine, climate change, space, and more are buried in massive pools of data. As a result, everyone—from small private companies to the largest government labs—will need technology that can perform quadrillions of operations per second to turn massive amounts of data into actionable insights. These new converged workloads are creating new workflows that drive the need for real-time, extreme scale systems.

Welcome to the exascale era

This new era of computing is quickly becoming the new norm for modern business operations. As such, it requires extending the power of supercomputing with the flexibility and extensibility of cloud technologies. The capabilities of these exascale systems can enable new areas of research and discovery that will support business imperatives for security, economic competitiveness, and societal good for years to come. In fact, HPC exascale technology powers research in climate study, renewable energy, and new materials with applications in more sustainable living.

Achieving these new levels of computing power proves energy intensive and calls for an approach addressing both the costs and environmental implications of this level of power consumption.

Processors are now exceeding 200W and GPUs are operating at over 300W, with components continuing to get even more power hungry going forward. High-density rack configurations in the high-performance computing space are moving from 20kW to 40kW—with estimates reaching beyond 70kW per rack in 2022. With server lifecycles lasting three to five years, you need to choose systems and adopt cooling strategies that can stand the test of time while still performing at the highest levels.
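
To make those densities concrete, here is a rough back-of-the-envelope sketch. The node composition and overhead figures are illustrative assumptions of mine, not published specs:

```python
# Illustrative only: estimating rack power for a dense accelerated node design.
cpu_w = 200       # per-CPU draw, per the figure above
gpu_w = 300       # per-GPU draw, per the figure above
node_kw = (2 * cpu_w + 4 * gpu_w) / 1000 + 0.3   # +0.3 kW assumed for memory/fans/overhead
nodes_per_rack = 36                              # assumed dense packing

print(f"{node_kw * nodes_per_rack:.1f} kW per rack")  # ~68 kW, in line with the 70kW estimate
```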

In the quest for higher levels of performance, it can be easy to lose sight of the fact that efficient energy usage is a key enabler of the mission to accomplish more work in the same footprint for less cost. The energy efficiency of the data center facility and of the IT equipment itself are of equal concern, ensuring that the majority of the energy consumed is put to good use for actual computing. In other words, the power that is delivered to the productive portion of the system is the power that matters most.
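
The post doesn't name a metric here, but the standard way to quantify this facility-versus-IT split is Power Usage Effectiveness (PUE); a minimal sketch:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT equipment power.
    The closer to 1.0, the more of the energy drawn goes to actual computing."""
    return total_facility_kw / it_equipment_kw

# Example: a facility drawing 1,300 kW overall to run 1,000 kW of IT gear.
print(pue(1300.0, 1000.0))  # 1.3 -> 300 kW of cooling and power-delivery overhead
```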

A holistic approach coupled with a strong commitment to sustainability

As a leader in the HPC industry, HPE is producing a new breed of computing systems that are more intelligently designed and efficient than ever before. Our HPC and AI solutions encompass a supercomputing architecture across compute, interconnect, software, storage, and services—delivered on premises, hybrid, or as a service.

We are at the forefront of transforming industries with technology and enabling our customers to harness data at exascale to advance scientific discovery and solve the world’s toughest challenges. HPE is an industry leader in sustainability, delivering customer outcomes with smart, future-proofed technology. And by offering these solutions as a service, we are expanding access to them.

With each generation of HPC technology, we strive to dramatically improve the processing power of next-generation systems while reducing energy consumption. Our fresh approach to energy efficiency is based on an ongoing promise to fundamentally rethink the way large-scale systems are designed and built. For instance, to counteract the high amount of heat generated from the computationally-demanding activities, our solutions have liquid cooling techniques to efficiently cool large-scale systems, lowering energy needs and cost of operations.

This video shows how and why sustainability is at the heart of our commitment to developing a high-performance computing ecosystem: https://www.youtube.com/watch?v=YmoBQAnOozY

Sustainable supercomputing in action

HPE and Cray have over 70 supercomputers on the Green 500 list of the world’s most energy-efficient supercomputers, with features that optimize energy efficiency in order to do dramatically more with less.

Physical space is another major concern in the HPC community. All of our solutions are density-optimized to pack incredible performance into a small data center footprint. Our storage options likewise minimize the space and power consumed by storage, so that the largest share of performance output goes to compute. Our industry-leading designs can cut the space needed to run operations in half, improving energy efficiency, maximizing stability and sustainability, and future-proofing customer operations.

From a software perspective, we offer customers the ability to automatically collect and analyze power metrics for all hardware—CPU, GPU, rack, chassis, nodes, rack AC, bulk DC, and CDUs. The software also supports HPE ARCS, so users can analyze the power and cooling metrics and react to preconfigured alerts such as water leakage, power supply failure, or overheating.
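
As a rough illustration of what reacting to preconfigured alerts can look like, here is a hypothetical sketch. It does not use the actual HPE ARCS interface; every name and threshold below is invented for illustration:

```python
# Hypothetical alert evaluation -- not the real HPE ARCS API.
THRESHOLDS = {"psu_failures": 0, "inlet_temp_c": 35.0}  # invented values

def check_alerts(sample: dict) -> list[str]:
    """Compare one telemetry sample against preconfigured alert conditions."""
    alerts = []
    if sample.get("water_leak", False):
        alerts.append("water leakage detected")
    if sample.get("psu_failures", 0) > THRESHOLDS["psu_failures"]:
        alerts.append("power supply failure")
    if sample.get("inlet_temp_c", 0.0) > THRESHOLDS["inlet_temp_c"]:
        alerts.append("overheating")
    return alerts

print(check_alerts({"water_leak": False, "psu_failures": 1, "inlet_temp_c": 41.0}))
```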

Additionally, we are starting to add ML AIOps capabilities to our HPC system management software, which offers administrators monitoring and management of all aspects of a cluster, including power and cooling.

Supporting HPE’s transition to an everything-as-a-service company, HPC is now offered on HPE GreenLake. Our as-a-service model is enabling customers to reduce operational inefficiencies from low utilization rates, overprovisioning, and obsolete equipment by 30 to 40%. The good news is that the HPC community can now also leverage the benefits of this model.

Finally, we work closely with HPE Financial Services through its Asset Upcycling Services to help customers transition from their current infrastructure, with responsible retiring, removal, and disposal of IT equipment, and refurbishment of IT to avoid the need for disposal.

Energy efficiency and environmental footprint play a crucial role in the long-term sustainability of HPC

Improving the energy efficiency of systems solving some of the world’s most complex problems will guarantee the capacity to achieve even greater computing possibilities to drive positive social change.

At HPE, we have transformed all aspects of HPC to enhance computing power without prohibitive increases in energy that can lead to higher costs of operation and cause irrevocable environmental damage. We lead the market in energy efficiency research and power-efficient design.

Make no mistake. We are here to define the new era of computing.

HPE Discover 2021: Can’t-miss sessions showcase how HPC is powering digital transformation

June 4, 2021

This is a duplicate of a high-performance computing blog I authored for HPE, originally published at the HPE Newsroom on June 1, 2021.

Join us at HPE Discover 2021 to learn how innovative high-performance computing (HPC) solutions and technologies are driving digital transformation from enterprise to edge.


As our premier virtual event, HPE Discover 2021 is where the next wave of digital transformation begins—and high-performance computing is playing a critical role in that transformation. Join us for three days packed with actionable live and on-demand sessions, key announcements, and much more.

Register here for HPE Discover 2021.

How HPC and supercomputing are accelerating digital transformation

Welcome to the exascale era, where HPE HPC solutions scale up or scale out, on-premises or in the cloud, with purpose-built storage and software. Today, HPC is powering innovation for artificial intelligence, scientific research, drug discovery, advanced computing, engineering, and much more. This means you can run all your workloads within your economic requirements.

Ready to learn more? Here’s a list of recommended spotlight and breakout sessions plus demos to help you plan your Discover 2021 experience—and dive into all things HPC.

Top recommended HPC sessions

Unlocking insight to drive Innovation from edge to exascale   SL4447

The next wave of digital transformation will be driven by unlocking the full potential of data, using purpose-built software and infrastructure that shortens time to adoption and enables rapid scaling of HPC, AI, and analytics. Please join us to learn how HPE is enabling this data-driven future from edge to core to cloud, at any scale—for every enterprise.

Challenges scaling AI into production? Learn how to fix them   B4452

If you’re struggling to unlock the full potential of AI, spend 8 minutes with us to learn how to move past the most common challenges and achieve AI at any scale.

Accelerating your AI training models   B4424

Training AI models is often both compute- and data-intensive due to the massive size and complexity of new algorithms. In this session, learn how new innovations in HPE workload-optimized systems, such as in-memory capability paired with mixed acceleration, dramatically speed up ingest and time to production for these AI models.

From the world’s largest AI machines to your first 17″ AI edge data center   DEMO4459

In the age of insight, AI is everywhere. HPE provides end-to-end AI infrastructure for all environments, for organizations of all sizes, across all industries and mission areas. This session gives an overview of the portfolio and demonstrates AI solution examples from initial adoption to large-scale production.

Unleash the power of analytics for your enterprise at the edge   B4451

Unlocking insights from data has never been more critical. Soon the majority of new enterprise data will be created outside of the data center or public cloud, at the edge. Learn how HPE helps customers gain insights with AI at the edge in a variety of use cases and with the flexibility to consume as-a-service.

Hewlett Packard Labs and Carnegie Clean Energy revolutionize wave energy with reinforcement learning   B4365

Wave energy capture brings the unique challenges of complex wave dynamics, modeling errors, and changing generator dynamics. Hear from Hewlett Packard Labs and Carnegie Clean Energy about their mission to develop a self-learning wave energy converter using deep reinforcement learning technology—pushing trustworthiness in the next generation of AI.

Live demos

We will also be featuring live on-location demos at Chase Center and the Mercedes Formula 1 Factory, scheduled to take place on June 22 or 23 (depending on which region you’re joining from). During the times specified below, HPE will have experts available to help answer your questions.

  • AMS: 11:00 AM – 11:45 AM PDT and 11:45 AM – 12:30 PM PDT on Tuesday (Day 1)
  • APJ: 2:30 PM – 3:15 PM JST and 3:15 PM – 4:30 PM JST on Wednesday (Day 1)
  • EMEA: 12:30 PM – 1:15 PM CET and 1:15 PM – 2:00 PM CET on Wednesday (Day 1)

Interested in other sessions? We invite you to explore the full line-up of sessions on our content catalog and build your own agenda. You can also view the agenda for each region here.

Build your playlist

Want to build your own playlist? Here’s how. Once registered for HPE Discover, log in to the virtual platform to view the keynotes, sessions, demos, and more. You can filter by content type, area of interest, or keyword search. Then simply click the “+” icon to add an item to My Playlist. You can also download your playlist into your preferred personal calendar.

Register now.

We look forward to seeing you virtually at HPE Discover 2021!

Perlmutter powers up: Meet the new next-gen supercomputer at Berkeley Lab

June 4, 2021

This is a duplicate of a high-performance computing blog I authored for HPE, originally published at the HPE Newsroom on May 27, 2021.

Just unveiled by the National Energy Research Scientific Computing Center at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, Perlmutter is now ready to play a key role in advanced computing, AI, data science, and more. It’s the first in a series of HPE Cray EX supercomputers to be delivered to the DOE.

I’m happy to share that the National Energy Research Scientific Computing Center (NERSC), a high-performance-computing user facility at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, announced that the phase 1 installation of its new supercomputer, Perlmutter, is now complete.


An important milestone for NERSC, the Department of Energy (DOE), HPE, and the HPC community

Perlmutter is the largest HPE Cray EX supercomputer shipped to date. It’s the first in a series of HPE Cray EX supercomputers to be delivered to the DOE for important research in areas such as climate modeling, early universe simulations, cancer research, high-energy physics, protein structure, materials science, and more. Over the next few years, HPE will deliver three exascale supercomputers to the DOE, culminating in the two-exaflop El Capitan system for Lawrence Livermore National Laboratory.

The HPE Cray EX supercomputer is HPE’s first exascale-class HPC system, and Perlmutter uses many of its exascale-era innovations. Designed from the ground up for converged HPC and AI workloads, the HPE Cray EX supports a heterogeneous computing model, allowing compute blades to be mixed across processor and GPU architectures. This capability is important for the NERSC usage models.

Perlmutter will be a heterogeneous system with both GPU-accelerated and CPU-only nodes to support NERSC applications and users, including a mix of those who utilize GPUs and those who don’t.

Phase 1 is the GPU-accelerated portion of Perlmutter and contains 12 HPE Cray EX cabinets with 1,500 AMD EPYC nodes, each with four NVIDIA A100 GPUs, for a total of 6,000 GPUs. Phase 2 will add 3,000 dual-socket, CPU-only AMD EPYC nodes later this year, housed in an additional 12 cabinets.

Perlmutter takes advantage of the HPE Slingshot interconnect, which incorporates intelligent features that enable diverse workloads to run simultaneously across the system. It includes novel adaptive routing, quality-of-service, and congestion management features while retaining full Ethernet compatibility. Ethernet compatibility is important to NERSC’s use of the supercomputer, as it allows new paths for connecting the system more broadly to the center’s internal file system as well as to the external internet.

Perlmutter also enables a direct connection to the Cray ClusterStor E1000, an all-flash Lustre-based storage system. ClusterStor E1000 is the fastest storage system of its kind, transferring data at more than 5 terabytes/sec. The Perlmutter deployment is currently the world’s largest all-flash system—with 35 petabytes of usable storage. 

HPE Cray EX is more than just hardware innovations

HPE Cray software is designed to meet the needs of this new era of computing, where systems are growing larger and more complex and must accommodate converged HPC, AI, analytics, modeling, and simulation workloads in a cloudlike environment. The HPE Cray software deployed with Perlmutter will be key to NERSC’s ability to deliver a new type of supercomputing experience to users who are becoming more cloud-aware and cloud-focused.

The exascale era is just beginning

We here at HPE are excited about and committed to this journey with the DOE, as we help usher in a new class of supercomputers that will drive scientific research at a scale that is barely fathomable today.

Perlmutter is an important first step on this path, and we look forward to the many diverse scientific discoveries that NERSC will deliver with its new HPE Cray EX supercomputer.

Read the complete NERSC news story: Berkeley Lab Deploys Next-Gen Supercomputer, Perlmutter, Bolstering U.S. Scientific Research

One scientist’s journey to accelerate drug discovery for COVID-19 using cloud-based supercomputing

July 2, 2020


This is a duplicate of a high-performance computing blog I authored for Cray, originally published at the HPE Newsroom on May 5, 2020.

In the fight against COVID-19, Dr. Jerome Baudry, Ms. Pei-Ling Chan Chair and Professor in the Department of Biological Sciences at The University of Alabama in Huntsville (UAH), is leveraging computational tools to significantly reduce time to study critical stages of drug design on our path to a cure. And he’s inviting us all along for the ride.

VIDEO: Dr. Jerome Baudry begins chronicling his mission to COVID-19 drug discovery

Forming a formidable attack against COVID-19

Dr. Baudry, a distinguished researcher in molecular biophysics, is taking a molecular docking approach to predict the binding – or “lock and key” interaction – between the viral protein in the coronavirus and molecules that can counterattack its functions.

The molecules that he and his team are using can be found in chemical compounds and other natural elements in plants, fungi, and sometimes animals, to understand how they can be applied to drug development. They’re researching the efficacy of these natural products in attacking the COVID-19 protein to figure out which ones can help to treat or mitigate the effects of the virus. These outcomes are then further tested in pre-clinical and clinical studies.

Accelerating insights from molecular events in all stages of drug design


The team has one mission: to get accurate results, fast. Their plan is to identify which drugs work and which don’t. To do so in the most efficient way possible, they are modeling and simulating the molecular events that take place in each of the critical stages of the drug design process. But that task requires timely access to massive computing power and targeted modeling and simulation capabilities to build complex computational models and deep datasets.

That’s where HPE comes into play.

HPE teams up with Dr. Baudry to accelerate drug discovery

At HPE, we believe in being a force for good, so when the COVID-19 pandemic struck, HPE quickly made available supercomputing resources, along with a dedicated technical staff, free of charge to help scientists tackle complex research. That’s when we met Dr. Baudry and set him and his team up on HPE’s Sentinel supercomputer, which can perform 147 trillion floating point operations per second and store 830 terabytes of data. Sentinel – which is as fast as the earth’s entire population performing 20,000 calculations per second – is significantly accelerating discovery and saving months of research time and hundreds of thousands of dollars.
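
The population analogy checks out arithmetically, assuming a world population of roughly 7.35 billion (my assumption, not a figure from the post):

```python
# Back-of-the-envelope check of the Sentinel analogy.
people = 7.35e9          # assumed approximate world population
ops_per_person = 20_000  # calculations per second, per the analogy
print(people * ops_per_person)  # 1.47e14 = 147 trillion operations per second
```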

HPE’s Sentinel system is a Cray XC50 supercomputer, providing extreme scalability and massive computing power, and uses Intel® Xeon® Scalable processors. Through a partnership with Microsoft Azure, Dr. Baudry is able to access the supercomputer through the Azure cloud.

Early milestone achieved on HPE’s Sentinel: Performing 200K molecular docking calculations in 10 hours


Dr. Baudry and team recently kicked off their research journey using HPE’s Sentinel and have already successfully performed 200,000 molecular docking calculations. Now, how did they get this number? The team took a subset of 20,000 molecules of natural compounds from a larger database of 200,000 and calculated how they docked with the virus’ protein to counterattack COVID-19. And they did this with the viral protein that’s positioned in 10 different ways to increase possibilities. These 200K calculations were performed in just 10 hours compared to the few weeks it would typically take on a regular computer cluster. Getting these timely outcomes enables the research team to move projects forward more quickly in the drug design process, and on to experimental drug testing.
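
The arithmetic behind the milestone, as a quick sketch using only the numbers above:

```python
# How the 200K figure breaks down, per the description above.
molecules = 20_000     # subset of natural compounds screened
protein_poses = 10     # the viral protein positioned 10 different ways
total = molecules * protein_poses
print(total)                  # 200,000 docking calculations
print(total / (10 * 3600))    # ~5.6 calculations/sec sustained over the 10 hours
```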

Seeing initial progress like this is incredibly inspiring. It’s been remarkable to see the teams at HPE quickly join forces with Dr. Baudry and co-researchers to get projects up and running and make breakthroughs in COVID-19 drug discovery.

And this is just the beginning! We’re excited to follow Dr. Baudry’s journey and welcome you to tune into a regular video blog series where he’ll share updates on his research, including challenges and success stories. Check out this page for regular timeline updates:

One Scientist’s Journey to Accelerate Drug Discovery for COVID-19


Cray ClusterStor in Azure Demonstrates Amazing Performance

July 2, 2020


This is a duplicate of a high-performance computing blog I authored for Cray, originally published at the Cray Blog Site on Nov 19, 2019.


In April, we announced three offers to expand Cray supercomputing in the cloud capabilities for customers: Cray ClusterStor in Azure (for HPC storage), Cray in Azure for Manufacturing, and Cray in Azure for Electronic Design Automation.

We’ve seen strong interest in all three offers among a number of customers over the past months, all seeking out ways to run HPC and AI jobs on Cray in Azure. But we’ve seen particularly intense interest in the Cray ClusterStor in Azure offer.

Managing data storage and its accessibility is a common challenge among nearly all of these organizations, so it’s not a total surprise. HPC jobs at scale tend to generate a significant amount of data, and that data is critical for continued evaluation and processing.

ClusterStor in Azure is particularly attractive to customers looking for HPC storage options in the cloud because it helps meet those needs, and it’s an incredibly scalable, competitively priced option for high-performance storage. It offers a Lustre®-based, single-tenant, bare metal, and fully managed HPC environment in Microsoft Azure, and is available in easy-to-consume small, medium, and large configurations. ClusterStor in Azure delivers the exact performance, speed, scalability, data protection, and availability you need.

But, how would Cray ClusterStor in Azure perform?

ClusterStor typically performs very well in data centers, but how would it respond in a cloud data center? The power envelope is different, the network technology is different — along with numerous other variables that change going from an on-premises data center to a cloud data center.

As a test, Cray and Microsoft worked with customers on a use case to test this performance. Our chosen use case included imaging, modeling, and simulation. We decided that the best way to test ClusterStor in Azure would be to independently measure read-performance and write-performance.

The Azure configuration was designed to simulate imaging jobs utilizing a variety of pre-stack and post-stack migration, full-waveform inversion, and real-time migration techniques.

Specifically, here are the specs for what we started with:

• 468 Azure HB VMs totaling over 28,000 AMD® EPYC® 1st gen CPU cores

• More than 123 TB/sec aggregate memory bandwidth

• Cray ClusterStor L300
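
Those aggregate numbers are consistent with the published Azure HB-series (HB60rs) per-VM specs of 60 EPYC cores and roughly 263 GB/s of memory bandwidth; treat the per-VM figures here as my assumption, not part of the original post:

```python
# Sanity check of the aggregate specs, assuming HB60rs per-VM figures.
vms = 468
cores_per_vm = 60          # assumed (Azure HB60rs)
mem_bw_gbs_per_vm = 263.0  # assumed (Azure HB60rs), GB/s

print(vms * cores_per_vm)              # 28,080 -> "over 28,000" cores
print(vms * mem_bw_gbs_per_vm / 1000)  # ~123 TB/s aggregate memory bandwidth
```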

We’ve seen ClusterStor excel over and over again in a variety of use cases, so we expected excellent results.

And we were right!

Here’s what we found:

• The combination of the Azure HB-series VMs and Cray ClusterStor storage provided a highly scalable system delivering an 11.5x improvement in time to solution as the pool of compute VMs was increased from 16 to 400.

• Cray ClusterStor performance peaked at 42 GB/sec (reads) and 62 GB/sec (writes). It also delivered significant differentiation by driving a 66% improvement in application performance as compared to an alternative, high-performance NFS approach.

• In fact, Cray ClusterStor in Azure achieved performance over an Ethernet network that rivals that of the dominant, proprietary HPC interconnects, saving cost without sacrificing performance!
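
For a rough feel for what the peak write number means in practice, here is an illustrative calculation; the dataset size is invented for the example:

```python
# Illustrative only: time to write a 10 TB checkpoint at the peak write rate.
dataset_tb = 10           # assumed dataset size
write_gb_per_sec = 62     # peak write throughput from the results above
seconds = dataset_tb * 1000 / write_gb_per_sec
print(f"{seconds:.0f} s") # ~161 s, i.e., under three minutes
```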

Impressive results indeed — which is exactly what you should expect when you choose the powerful combination of supercomputing and cloud capabilities for your organization’s mission-critical applications.

Learn more about Cray ClusterStor in Azure here or talk to your rep to work out a Cray in Azure configuration that best fits your cloud strategy.


Is Your HPC System Fully Optimized? + Live Chat

June 11, 2018


This is a duplicate of a high-performance computing blog I authored for Cray, originally published at the Cray Blog Site.


Let’s talk operating systems…

Take a step back and look at the systems you have in place to run applications and workloads that require high core counts to get the job done. (You know, jobs like fraud detection, seismic analysis, cancer research, patient data analysis, weather prediction, and more — jobs that can really take your organization to the next level, trying to solve seriously tough challenges.)

You’re likely running a cutting-edge processor, looking to take advantage of the latest and greatest innovations in compute processing and floating-point calculations. The models you’re running are complex, pushing the bounds of math and science, so you want the best you can get when it comes to processor power.

You’re probably looking at a high-performance file system to ensure that your gigantic swaths of data can be accessed, processed, and stored so your outcomes tell the full story.

You’re most likely using a network interconnect that can really move so that your system can hum, connecting data and processing in a seamless manner, with limited performance hit.

We can go on and on, in terms of drive types, memory choices, and even cabling decisions. The more optimized your system can be for a high-performance workload, the faster and better your results will be.

But what about your software stack, starting with your operating system?

Live tweet chat

Join us for a live online chat at 10 a.m. Tuesday, June 19 — “Do you need an HPC-optimized OS?” — to learn about optimizing your HPC workloads with an improved OS. You can participate using a Twitter, LinkedIn or Facebook account and the hashtag #GetHPCOS.

Sunny Sundstrom and I will discuss what tools to use to customize the environment for your user base, whether there’s value in packaging things together, and what the underlying OS really means for you.

Bring your questions or offer your own expertise.

Moment of truth time

Are you optimizing your system at the operating environment level, like you are across the rest of your system? Are you squeezing every bit of performance benefit out of your system, even when it comes to the OS?

Think about it — if you’re searching for an additional area to drive a performance edge, the operating system could be just the place to look:

  • There are methods to allocate jobs to run on specific nodes tuned for specific workloads — driven at the OS level.
  • You can deploy a lighter-weight OS instance on your working compute node to avoid unnecessarily bringing down the performance of the job, while ensuring your service nodes can adequately run and manage your system.
  • You can decrease jitter of the overall system by using the OS as your control point. (A small concrete example follows below.)

All of this at the OS level.
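
As a small illustration of the OS as a control point (generic Linux, not any Cray-specific tooling): pinning a job to a reserved set of cores is one common way to cut system jitter.

```python
# Minimal sketch: restrict this process to cores 2-7, leaving cores 0-1
# free for OS housekeeping. Linux-only; assumes a host with 8+ cores.
import os

os.sched_setaffinity(0, {2, 3, 4, 5, 6, 7})   # 0 = the current process
print(os.sched_getaffinity(0))                # verify the new CPU mask
```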

Take a deeper look at your overall system, including the software stack, and specifically your operating system and operating environment.

You may be pleasantly surprised to find that you can drive even more performance in a seemingly unexpected place.

Until next time,

JBG
@jbgeorge

Add our 6/19 tweet chat to your calendar for further discussion.

VIDEO: 2017 OpenStack Board of Directors Election

January 9, 2017


OpenStack, Now and Moving Ahead: Lessons from My Own Personal Transformation

December 15, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

It’s a transformative time for OpenStack.

And I know a thing or two about transformations.

Over the last two and a half years, I’ve managed to lose over 90 pounds.

(Yes, you read that right.)

It was a long and arduous effort, and it is a major personal accomplishment that I take a lot of pride in.

Lessons I’ve Learned

When you go through a major transformation like that, you learn a few things about the process, the journey, and about yourself.

With OpenStack on my mind these days – especially after being nominated for the OpenStack Foundation Board election – I can see correlations between my story and where we need to go with OpenStack.

While there are a number of lessons learned, I’d like to delve into three that are particularly pertinent for our open source cloud project.

1. Clarity on the Goal and the Motivation

It’s a very familiar story for many people.  Over the years, I had gained a little bit of weight here and there as life events occurred – graduated college, first job, moved cities, etc. And I had always told myself (and others), “By the time I turned 40 years old, I will be back to my high school weight.”

The year I was to turn 40, I realized that I was running out of time to make good on my word!

And there it was – my goal and my motivation.

So let’s turn to OpenStack – what is our goal and motivation as a project?

According to wiki.OpenStack.org, the OpenStack mission is “to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable. OpenStack is open source, openly designed, openly developed by an open community.”

That’s our goal and motivation:

  • meet the needs of public and private clouds
  • no matter the size
  • simple to deploy
  • very scalable
  • open across all parameters

While we exist in a time where it’s very easy to be distracted by every new, shiny item that comes along, we must remember our mission, our goal, our motivation – and stay true to what we set out to accomplish.

2. Staying Focused During the “Middle” of the Journey

When I was on the path to lose 90 pounds, it was very tempting to be satisfied during the middle part of the journey.

After losing 50 pounds, needless to say, I looked and felt dramatically better than I had been before.  Oftentimes, I was congratulated – as if I had reached my destination.

But I had not reached my destination.

While I had made great progress – and there were very tangible results to demonstrate that – I had not yet fully achieved my goal.  And looking back, I am happy that I was not content to stop halfway through. While I had a lot to be proud of at that point, there was much more to be done.

OpenStack has come a long way in its fourteen releases:

  • The phenomenal Newton release focused on scalability, interoperability, and resiliency – things that many potential customers and users have been waiting for.
  • The project has now been validated as 100% compliant by the Core Infrastructure Initiative (CII) as part of the Linux Foundation, a major milestone toward the security of OpenStack.
  • Our community now offers the “Certified OpenStack Administrator” certification, a staple of datacenter software that much of the enterprise expects, further validating OpenStack for them.

We’ve come a long way.   But there is more to go to achieve our ultimate goal.  Remember our mission: open source cloud, public and private, across all size clouds, massively scalable, and simple to implement.

We are enabling an amazing number of users now, but there is more to do to achieve our goal. While we celebrate our current success, and as more and more customers are being successful with OpenStack in production, we need to keep our eyes on the prize we committed to.

3. Constantly Learning and Adapting

While losing 90 pounds was a major personal accomplishment, it could all have been in vain if I did not learn how to maintain the weight loss once it was achieved.

This meant learning what worked and what didn’t work, as well as adapting to achieve a permanent solution.

Case in point: a part of most weight loss plans is to get plenty of water daily, something I still do to this day. While providing numerous health advantages, it is also a big help with weight loss. However, I found that throughout the day, I would get consumed with daily activities and reach the end of the day without having reached my water requirement goal.

Through some experimentation with tactics – which included setting up reminders on my phone and keeping water with me at all times, among other ideas – I arrived at my personal solution: GET IT DONE EARLY.

I made it a point to get through my water goal at the beginning of the day, before my daily activities began. This way, if I did not remember to drink regularly throughout the day, it was of no consequence since I had already met my daily goal.

We live in a world where open source is being adopted by ever more people, including open source newcomers. From Linux to Hadoop to Ceph to Kubernetes, we are seeing more and more projects find success with a new breed of users.  OpenStack’s role is not to shun these projects in isolationist fashion, but rather to understand how OpenStack adapts so that we get maximum attainment of our mission.

This also means that we understand how our project gets “translated” for the bevy of customers who have legitimate challenges that OpenStack can help address. It means that we help potential users wade through the cultural IT changes that will be required.

Learning where our market is taking us, as well as adapting to the changing technology landscape, remains crucial for the success of the project.

Room for Optimism

I am personally very optimistic about where OpenStack goes from here. We have come a long way, and have much to be proud of.  But much remains to be done to achieve our goal, so we must be steadfast in our resolve and focus.

And it is a mission that we can certainly accomplish.  I believe in our vision, our project, and our community.

And take it from me – reaching BIG milestones is very, very rewarding.

Until next time,

JOSEPH
@jbgeorge

Highlights from OpenStack Summit Barcelona

October 31, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

What a great week at the OpenStack Summit this past week in Barcelona! Fantastic keynotes, great sessions, and excellent hallway conversations.  It was great to meet a number of new Stackers as well as rekindle old friendships from back when OpenStack kicked off in 2010.

A few items of note from my perspective:

OpenStack Foundation Board of Directors Meeting

OpenStack Board In Session

As I mentioned in my last blog, it is the right of every OpenStack member to attend and listen in on each board meeting that the OpenStack Foundation Board of Directors holds.  I made sure to head out on Monday and attend most of the day.  There was a packed agenda, so here are a few highlights:

  • Interesting discussion around the User Committee project that board member Edgar Magana is working toward, covering its composition, whether members should be elected, and whether bylaw changes are warranted.  It was a deep topic that required further time, so it was deferred to a later discussion, with work to be done to map out the details. This is an important endeavor for the community in my opinion – I will be keeping an eye on how this progresses.
  • A number of strong presentations by prospective gold members were delivered as they made their cases to be added to that tier. I was especially happy to see a number of Chinese companies presenting and making their case.  China is a fantastic growth opportunity for the OpenStack project, and it was encouraging to see players in that market discuss all they are doing for OpenStack in the region.  Ultimately, we saw City Network, Deutsche Telekom, 99Cloud and China Mobile all get voted in as Gold members.
  • Lauren Sell (VP of Marketing for the Foundation) spoke on a visionary model her team is investigating, in which our community engages with other projects on user events and conferences.  Kubernetes, Ceph, and other projects were named as examples.  This is a great indicator of how we’ve evolved, as it highlights that multiple projects are often needed to address actual business challenges.  A strong indicator of maturity for the community.

Two Major SUSE OpenStack Announcements

SUSE advancements in enterprise ready OpenStack made its way to the Summit in a big way this week.

  1. SUSE OpenStack Cloud 7: While we are very proud to be one of the first vendors to provide an enterprise-grade, Newton-based OpenStack distribution, this release also offers features like new Container-as-a-Service capabilities and non-disruptive upgrade capabilities.

    Wait, non-disruptive upgrade?  As in, no downtime?  And no service interruptions?

    That’s right – disruption to service is a big no-no in the enterprise IT world, and now SUSE OpenStack Cloud 7 provides you the direct ability to stay live during OpenStack upgrade.

  2. Even more reason to become a COA.  All the buzz around the Foundation’s “Certified OpenStack Administrator” exam got even better this week when SUSE announced that the exam would now feature the SUSE platform as an option.

    And BIG bonus win – if you pass the COA using the SUSE platform, you will be granted

    1. the Foundation’s COA certification
    2. SUSE Certified Administrator in OpenStack Cloud certification

That’s two certifications with one exam.  (Be sure to specify the SUSE platform when taking the exam to take advantage of this option.)

There’s much more to these critical announcements so take a deeper look into them with these blogs by Pete Chadwick and Mark Smith.  Worth a read.

Further Enabling the Enterprise

As you know, enterprise adoption of OpenStack is a major passion of mine – I’ve captured a couple more signs I saw this week of OpenStack continuing to head in the right direction.

  • Progress in Security. On stage this week, OpenStack was awarded the Core Infrastructure Initiative (CII) Best Practices badge.  The CII is a project out of the Linux Foundation that validates open source projects, specifically for security, quality, and stability. By earning this badge, OpenStack is now validated by a trusted third party and is 100% compliant.  Security FTW!
  • Workload-based Sample Configs.  This stable of assets has been building for some time, but OpenStack.org now boasts a number of reference architectures addressing some of the most critical workloads.  From web apps to HPC to video processing and more, there are great resources on how to optimize OpenStack for these workloads.  (Being a big data fan, I was particularly happy with the big data resources here.)

I’d be interested in hearing what you saw as highlights as well – feel free to leave your thoughts in the comments section.

OK, time to get home, get rested – and do this all over again in 6 months in Boston.

(If you missed the event this time, OpenStack.org has you covered – start here to check out a number of videos from the event, and go from there.)

Until next time,

JOSEPH
@jbgeorge

 

Boston in 2017!

Renewing Focus on Bringing OpenStack to the Masses

October 17, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Happy Newton Release Week!

This 14th release of OpenStack is one we’ve all been anticipating, with its focus on scalability, resiliency, and overall user experience – important aspects that really matter to enterprise IT organizations, and that help drive broader adoption of OpenStack among those users.

(More on that later.)

Six years of Community, Collaboration, and Growth

I was fortunate enough to have been at the very first OpenStack “meeting of the minds” at the Omni hotel in Austin, back in 2010, when just the IDEA of OpenStack was in preliminary discussions. Just a room full of regular everyday people, across numerous industries, who saw the need for a radical new open source cloud platform, and had a passion to make something happen.

And over the years, we’ve seen progress: the creation of the OpenStack Foundation, thousands of contributors to the project, scores of companies throwing their support and resources behind OpenStack, the first steps beyond North America to Europe and Asia (especially with all the OpenStack excitement in India, China, and Japan), numerous customers adopting our revolutionary cloud project, and on and on.

Power to the People

But this project is also about our people.

And we, as a community of individuals, have grown and evolved over the years as well – as have the developers, customers and end users we hope to serve with the project. What started out with a few visionary souls has now blossomed into a community of 62,000+ members from 629 supporting companies across 186 countries.

As a member of the community, I’ve seen positive growth in my own life since then as well.  I’ve been fortunate to have been part of some great companies – like Dell, HPE, and now SUSE.  And I’ve been able to help enterprise customers solve real problems with impactful open source solutions in big data, storage, HPC, and of course, cloud with OpenStack.

And there might have been one other minor area of self-transformation since those days as well, as the graphic illustrates… 🙂


Clearly we – the OpenStack project and the OpenStack people – are evolving for the better.

 

So Where Do We Go From Here?

In 2013, I was able to serve the community by being a Director on the OpenStack Foundation Board. Granted, things were still fairly new – new project ideas emerging, new companies wanting to sponsor, developers being added by the day, etc – but there was a personal focus I wanted to drive among our community.

“Bringing OpenStack to the masses.”

And today, my hope for the community remains the same.

While we celebrate all the progress we have made, here are some of my thoughts on what we, as a community, should continue to focus on to bring OpenStack to the masses.

Adoption with the Enterprise by Speaking their Language

Take a look at the most recent OpenStack User Survey. It provides a great snapshot into the areas that users need help.

  • “OpenStack is great to recommend, however there’s a fair amount of complexity that needs to be tackled if one wishes to use it.”
  • “OpenStack lacks far too many core components for anything other than very specialized deployments.”
  • “Technology is good, but no synergies between the sub-projects.”

2016 data suggests that enterprise customers are looking for these sorts of issues to be addressed, as well as security and management practices keeping pace with new features. And, with all the very visible security breaches in recent months, the enterprise is looking for open source projects to put more emphasis on security.  In fact, many of the customers I engage with love the idea of OpenStack, but still need help with these fundamental requirements they have to deal with.

Designing / Positioning OpenStack to Address Business Challenges

Have you ever wondered why so many tall office buildings have revolving doors? Isn’t a regular door simpler, less complex, and easier to use?  Why do we see so many revolving doors, when access can be achieved much more simply by other means?

Because revolving doors don’t exist to solve access problems – they solve a heated-air loss problem.

When a door in a tall building is opened, cold outside air forces its way in, pushing the warmer, lighter air up – which is then lost through vents at the top of the building. Revolving doors dramatically limit that loss by sealing off outside access as they rotate.

Think of our project in the same way.

  • What business challenges can be addressed today / near-term by implementing an OpenStack-based solution?
  • Beyond the customer set looking to build a hosted cloud to resell, what further applications can OpenStack be applied to?
  • How can OpenStack provide an industry-specific competitive advantage to the financial sector?  To healthcare?  To HPC?  To the energy sector?  How about retail or media?

Address the Cultural IT Changes that Need to Happen

I recently read a piece where Jonathan Bryce spoke to the “cultural IT changes that need to occur” – and I love that line of thinking.

Jonathan specifically said “What you do with Windows and Linux administrators is the bigger challenge for a lot of companies. Once you prove the technology, you need policies and training to push people.”

That is spot on.

What we are all working on with OpenStack will fundamentally shift how IT organizations will operate in the future. Let’s take the extra step, and provide guidance to our audiences on how they can evolve and adapt to the coming changes with training, tools, community support, and collaboration. A good example of this is the Certified OpenStack Administrator certification being offered by the Foundation, and training for the COA offered by the OpenStack partner ecosystem.

Further Champion the OpenStack Operator

Operators are on the front lines of implementing OpenStack, and “making things work.” There is no truer test of the validity of our OpenStack efforts than when more and more operators can easily and simply deploy / run / maintain OpenStack instances.

I am encouraged by the increased focus on documentation, connecting developers and operators more, and the growth of a community of operators sharing stories on what works best in practical implementation of OpenStack. And we will see this grow even more at the Operator Summit in Barcelona at OpenStack Summit (details here).

We are making progress here but there’s so much more we can do to better enable a key part of our OpenStack family – the operators.

The Future is Bright

Since we’re celebrating the Newton release, a quote from Isaac Newton on looking ahead seems fitting…

“To myself I am only a child playing on the beach, while vast oceans of truth lie undiscovered before me.”

When it comes to where OpenStack is heading, I’m greatly optimistic.  As it has always been, it will not be easy.  But we are making a difference.

And with continued and increased focus on enterprise adoption, addressing business challenges, aiding in the cultural IT change, and an increased focus on the operator, we can go to the next level.

We can bring OpenStack to the masses.

See you in Barcelona in a few weeks.

Until next time,

JOSEPH
@jbgeorge