Archive

Posts Tagged ‘HPC’

A more sustainable world with high-performance computing

June 14, 2021

This is a duplicate of a high-performance computing blog I authored for HPE, originally published at the HPE Newsroom on June 14, 2021.

The new era of computing arrives with an increased emphasis on the importance of sustainability. Discover how HPE is delivering on its sustainability commitments with innovative, environmentally friendly HPC and AI solutions plus flexible as-a-service and financing options.


The data explosion is not slowing down. In fact, by 2025, IDC predicts that worldwide data will grow 61%, to 175 zettabytes (175 × 10²¹ bytes). This explosion of built-in intelligence, hyper-connectivity, and data from the edge is reshaping markets, disrupting every industry, and transforming how we live and work.

Answers to some of society’s most pressing challenges across medicine, climate change, space, and more are buried in massive pools of data. As a result, everyone—from small private companies to the largest government labs—will need technology that can perform quadrillions of operations per second to turn massive amounts of data into actionable insights. These new converged workloads are creating new workflows that drive the need for real-time, extreme scale systems.

Welcome to the exascale era

This new era of computing is quickly becoming the new norm for modern business operations. As such, it requires an extension of the power of supercomputing with the flexibility and extensibility of cloud technologies. The capabilities of these exascale systems can enable new areas of research and discovery that will support business imperatives for security, economic competitiveness, and societal good for years to come. In fact, HPC exascale technology powers research in climate study, renewable energy, and new materials that can be applied to more sustainable living.

Achieving these new levels of computing power proves energy intensive and calls for an approach addressing both the costs and environmental implications of this level of power consumption.

Processors now exceed 200W and GPUs operate at over 300W, with components continuing to get even more power hungry going forward. High-density rack configurations in the high-performance computing space are moving from 20kW to 40kW—with estimates reaching beyond 70kW per rack in 2022. With server lifecycles lasting three to five years, you need to choose systems and adopt cooling strategies that can stand the test of time while still performing at the highest levels.

In the quest for higher levels of performance, it can be easy to lose sight of the fact that efficient energy usage is a key enabler of the mission to accomplish more work in the same footprint for less cost. Energy efficiency of both the data center facility and the IT equipment itself is of equal concern to ensure that the majority of the energy consumed is put to good use for actual computing. In other words, the power that is delivered to the productive portion of the system is the power that matters most.
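
To make that concrete, here is a rough back-of-the-envelope sketch in Python. The node counts, component wattages, and facility overhead below are illustrative assumptions, not HPE figures, and the PUE (power usage effectiveness) ratio is the standard industry metric for the facility-versus-IT split described above, even though the post doesn't name it.

    # Back-of-the-envelope sketch: rack power density and PUE.
    # All figures are illustrative assumptions, not HPE specifications.

    NODES_PER_RACK = 32            # hypothetical dense HPC rack
    CPU_W_PER_NODE = 2 * 225       # two ~225 W processors per node (assumed)
    GPU_W_PER_NODE = 4 * 300       # four ~300 W accelerators per node (assumed)
    OTHER_W_PER_NODE = 150         # memory, NICs, fans, etc. (assumed)

    it_load_kw = NODES_PER_RACK * (CPU_W_PER_NODE + GPU_W_PER_NODE + OTHER_W_PER_NODE) / 1000.0

    # PUE = total facility power / IT equipment power.
    # 1.0 would mean every watt reaches the computing; liquid cooling
    # pushes real systems closer to that ideal than air cooling does.
    facility_overhead_kw = 0.4 * it_load_kw   # assumed cooling + power-conversion overhead
    pue = (it_load_kw + facility_overhead_kw) / it_load_kw

    print(f"IT load per rack: {it_load_kw:.1f} kW")   # ~57.6 kW with these assumptions
    print(f"PUE: {pue:.2f}")                          # 1.40 with these assumptions

With these toy numbers, roughly seven watts out of every ten drawn from the grid reach the IT gear—the gap that efficient facility and system design works to close.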

A holistic approach coupled with a strong commitment to sustainability

As a leader in the HPC industry, HPE is producing a new breed of computing systems that are more intelligently designed and efficient than ever before. Our HPC and AI solutions encompass a supercomputing architecture across compute, interconnect, software, storage, and services—delivered on premises, hybrid, or as a service.

We are at the forefront of transforming industries with technology and enabling our customers to harness data at exascale to advance scientific discovery and solve the world’s toughest challenges. HPE is an industry leader in sustainability, delivering customer outcomes with smart, future-proofed technology. By offering as-a-service options, we are expanding access to these powerful solutions.

With each generation of HPC technology, we strive to dramatically improve the processing power of next-generation systems while reducing energy consumption. Our fresh approach to energy efficiency is based on an ongoing promise to fundamentally rethink the way large-scale systems are designed and built. For instance, to counteract the high amount of heat generated by computationally demanding workloads, our solutions use liquid cooling techniques to efficiently cool large-scale systems, lowering energy needs and the cost of operations.

This video shows how and why sustainability is at the heart of our commitment to developing a high-performance computing ecosystem: https://www.youtube.com/watch?v=YmoBQAnOozY

Sustainable supercomputing in action

HPE and Cray have over 70 supercomputers on the Green500 list of the world’s most energy-efficient supercomputers, with features that optimize energy efficiency in order to do exponentially more with less.

Physical space is another major concern in the HPC community. All of our solutions are density-optimized to pack incredible performance into a small data center footprint. Our storage options also minimize the space and power consumed by storage so that the largest share of performance output goes to compute. Our industry-leading designs can cut the amount of space needed to run operations in half, improving energy efficiency, maximizing stability and sustainability, and future-proofing customer operations.

From a software perspective, we offer customers the ability to automatically collect and analyze power metrics for all hardware—CPU, GPU, rack, chassis, nodes, rack AC, bulk DC, and CDUs. The software also supports HPE ARCS, so users can analyze the power and cooling metrics and react to preconfigured alerts such as water leakage, power supply failure, or overheating.
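
As a rough illustration of what that kind of telemetry and alerting looks like, here is a minimal Python sketch. It is not HPE's actual software or its API—the component names, metrics, and thresholds are invented—but it shows the pattern of collecting per-component power and cooling readings and reacting to preconfigured alerts.

    # Hypothetical telemetry/alerting sketch -- not the HPE software's real API.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        component: str   # e.g. "node042-gpu1", "rack07-cdu" (invented names)
        metric: str      # e.g. "power_watts", "coolant_leak", "inlet_temp_c"
        value: float

    # Preconfigured alert rules: metric -> threshold that should trigger action (assumed values).
    ALERT_THRESHOLDS = {
        "power_watts": 350.0,    # per-GPU power ceiling
        "inlet_temp_c": 45.0,    # overheating
        "coolant_leak": 0.5,     # above 0.5 means water detected
    }

    def check(readings):
        """Return alert messages for any reading that crosses its threshold."""
        alerts = []
        for r in readings:
            limit = ALERT_THRESHOLDS.get(r.metric)
            if limit is not None and r.value > limit:
                alerts.append(f"ALERT: {r.component} {r.metric}={r.value} exceeds {limit}")
        return alerts

    if __name__ == "__main__":
        sample = [
            Reading("node042-gpu1", "power_watts", 362.0),
            Reading("rack07-cdu", "coolant_leak", 0.0),
            Reading("chassis03", "inlet_temp_c", 47.5),
        ]
        for line in check(sample):
            print(line)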

Additionally, we are starting to add ML-driven AIOps capabilities to our HPC system management software, which gives administrators monitoring and management of all aspects of a cluster, including power and cooling.

Supporting HPE’s transition to an everything-as-a-service company, HPC is now offered through HPE GreenLake. Our as-a-service model enables customers to reduce operational inefficiencies due to low utilization rates, overprovisioning, and obsolete equipment by up to 30 to 40%. The good news is that the HPC community can now also leverage the benefits of this model.

Finally, we work closely with HPE Financial Services through Asset Upcycling Services to help customers transition from their current infrastructure with responsible retirement, removal, and disposal of IT equipment, and with refurbishment that avoids the need for disposal altogether.

Energy efficiency and environmental footprint play a crucial role in the long-term sustainability of HPC

Improving the energy efficiency of the systems solving some of the world’s most complex problems will ensure we have the capacity to take on even greater computing challenges and drive positive social change.

At HPE, we have transformed all aspects of HPC to enhance computing power without the prohibitive increases in energy use that drive up operating costs and cause irreversible environmental damage. We lead the market in energy efficiency research and power-efficient design.

Make no mistake. We are here to define the new era of computing.

HPE Discover 2021: Can’t-miss sessions showcase how HPC is powering digital transformation

June 4, 2021

This is a duplicate of a high-performance computing blog I authored for HPE, originally published at the HPE Newsroom on June 1, 2021.

Join us at HPE Discover 2021 to learn how innovative high-performance computing (HPC) solutions and technologies are driving digital transformation from enterprise to edge.


As our premier virtual event, HPE Discover 2021 is where the next wave of digital transformation begins—and high-performance computing is playing a critical role in that transformation. Join us for three days packed with actionable live and on-demand sessions, key announcements, and much more.

Register here for HPE Discover 2021.

How HPC and supercomputing are accelerating digital transformation

Welcome to the exascale era, where HPE HPC solutions scale up or scale out, on premises or in the cloud, with purpose-built storage and software. Today, HPC is powering innovation for artificial intelligence, scientific research, drug discovery, advanced computing, engineering, and much more. This means running all of your workloads within your economic requirements.

Ready to learn more? Here’s a list of recommended spotlight and breakout sessions plus demos to help you plan your Discover 2021 experience—and dive into all things HPC.

Top recommended HPC sessions

Unlocking insight to drive Innovation from edge to exascale   SL4447

The next wave of digital transformation will be driven by unlocking the full potential of data using purpose-built software and infrastructure that speeds time to adoption and rapid scaling of HPC, AI, and analytics. Please join us to learn how HPE is enabling this data-driven future from edge to core to cloud, at any scale—for every enterprise.

Challenges scaling AI into production? Learn how to fix them   B4452

If you’re struggling to unlock the full potential of AI, spend 8 minutes with us to learn how to move past the most common challenges and achieve AI at any scale.

Accelerating your AI training models   B4424

Training AI models is often both compute- and data-intensive due to the massive size and complexity of new algorithms. In this session, learn how new innovations in HPE workload-optimized systems, such as in-memory capability paired with mixed acceleration, dramatically speed up ingest and time to production for these AI models.

From the world’s largest AI machines to your first 17″ AI edge data center   DEMO4459

In the age of insight, AI is everywhere. HPE provides end-to-end AI infrastructure for all environments, for organizations of all sizes, across all industries and mission areas. This session gives an overview of the portfolio and demonstrates AI solution examples from initial adoption to large-scale production.

Unleash the power of analytics for your enterprise at the edge   B4451

Unlocking insights from data has never been more critical. Soon the majority of new enterprise data will be created outside of the data center or public cloud, at the edge. Learn how HPE helps customers gain insights with AI at the edge in a variety of use cases and with the flexibility to consume as-a-service.

Hewlett Packard Labs and Carnegie Clean Energy revolutionize wave energy with reinforcement learning   B4365

Wave energy capture brings the unique challenges of complex wave dynamics, modelling errors, and changes of generator dynamics. Hear from Hewlett Packard Labs and Carnegie Clean Energy on their mission to develop a self-learning wave energy converter using deep reinforcement learning technology—pushing trustworthiness in the next generation of AI.

Live demos

We will also be featuring live on-location demos at Chase Center and the Mercedes Formula 1 Factory, scheduled to take place on June 22 or 23 (depending on which region you’re joining from). During the times specified below, HPE will have experts available to help answer your questions.

  • AMS: 11:00 AM – 11:45 AM PDT and 11:45 AM – 12:30 PM PDT on Tuesday (Day 1)
  • APJ: 2:30 PM – 3:15 PM JST and 3:15 PM – 4:30 PM JST on Wednesday (Day 1)
  • EMEA: 12:30 PM – 1:15 PM CET and 1:15 PM – 2:00 PM CET on Wednesday (Day 1)

Interested in other sessions? We invite you to explore the full line-up of sessions in our content catalog and build your own agenda. You can also view the agenda for each region here.

Build your playlist

Want to build your own playlist? Here’s how. Once registered for HPE Discover, log in to the virtual platform to view the keynotes, sessions, demos, and more. You can filter by content type, area of interest, or keyword search. Then simply click the “+” icon to add an item to My Playlist. You can also download your playlist into your preferred personal calendar.

Register now.

We look forward to seeing you virtually at HPE Discover 2021!

Perlmutter powers up: Meet the new next-gen supercomputer at Berkeley Lab

June 4, 2021

This is a duplicate of a high-performance computing blog I authored for HPE, originally published at the HPE Newsroom on May 27, 2021.

Just unveiled by the National Energy Research Scientific Computing Center at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, Perlmutter is now ready to play a key role in advanced computing, AI, data science, and more. It’s the first in a series of HPE Cray EX supercomputers to be delivered to the DOE.

I’m happy to share that the National Energy Research Scientific Computing Center (NERSC), a high-performance-computing user facility at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, announced that the phase 1 installation of its new supercomputer, Perlmutter, is now complete.


An important milestone for NERSC, the Department of Energy (DOE), HPE, and the HPC community

Perlmutter is the largest HPE Cray EX supercomputer-based system shipped to date. It’s the first in a series of HPE Cray EX supercomputers to be delivered to the DOE for important research in areas such as climate modeling, early-universe simulations, cancer research, high-energy physics, protein structure, materials science, and more. Over the next few years, HPE will deliver three exascale supercomputers to the DOE, culminating in the two-exaflop El Capitan system for Lawrence Livermore National Laboratory.

The HPE Cray EX supercomputer is HPE’s first exascale-class HPC system, and Perlmutter uses many of its exascale-era innovations. Designed from the ground up for converged HPC and AI workloads, the HPE Cray EX supports a heterogeneous computing model, allowing compute blades to mix processor and GPU architectures. This capability is important for NERSC’s usage models.

Perlmutter will be a heterogeneous system with both GPU-accelerated and CPU-only nodes to support NERSC applications and users, including a mix of those who utilize GPUs and those who don’t.

Phase 1 is the GPU-accelerated portion of Perlmutter and contains 12 HPE Cray EX cabinets with 1,500 AMD EPYC-based nodes, each with 4 NVIDIA A100 GPUs, for a total of 6,000 GPUs. Phase 2 will add an additional 3,000 dual-socket, CPU-only AMD EPYC nodes later this year, housed in 12 additional cabinets.

Perlmutter takes advantage of the HPE Slingshot interconnect, which incorporates intelligent features that enable diverse workloads to run simultaneously across the system. It includes novel adaptive routing, quality-of-service, and congestion management features while retaining full Ethernet compatibility. Ethernet compatibility is important to NERSC’s use of the supercomputer, as it allows new paths for connecting the system more broadly to its internal file systems as well as to the external internet.

Perlmutter also enables a direct connection to the Cray ClusterStor E1000, an all-flash Lustre-based storage system. ClusterStor E1000 is the fastest storage system of its kind, transferring data at more than 5 terabytes/sec. The Perlmutter deployment is currently the world’s largest all-flash system—with 35 petabytes of usable storage. 
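
To put those two figures together, here is a quick sanity-check calculation (decimal units assumed): at more than 5 terabytes per second of aggregate bandwidth, sweeping through all 35 petabytes of flash takes on the order of two hours.

    # Rough scale check using the figures quoted above (decimal units assumed).
    capacity_pb = 35          # usable all-flash capacity, in petabytes
    bandwidth_tb_s = 5        # aggregate throughput, "more than 5 terabytes/sec"

    seconds = capacity_pb * 1000 / bandwidth_tb_s    # 1 PB = 1,000 TB
    print(f"Reading the full file system once: ~{seconds / 3600:.1f} hours")   # ~1.9 hours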

HPE Cray EX is more than just hardware innovations

HPE Cray software is designed to meet the needs of this new era of computing, where systems are growing larger and more complex with the need to accommodate converged HPC, AI, analytics, modeling, and simulation workloads in a cloudlike environment. The HPE Cray software deployed with Perlmutter will be key to NERSC’s ability to deliver a new type of supercomputing experience to their users, who are becoming more cloud-aware and cloud-focused.

The exascale era is just beginning

We here at HPE are excited about and committed to this journey with the DOE, as we help usher in a new class of supercomputers that will drive scientific research at a scale that is barely fathomable today.

Perlmutter is an important first step on this path, and we look forward to the many diverse scientific discoveries that NERSC will deliver with its new HPE Cray EX supercomputer.

Read the complete NERSC news story: Berkeley Lab Deploys Next-Gen Supercomputer, Perlmutter, Bolstering U.S. Scientific Research

Is Your HPC System Fully Optimized? + Live Chat

June 11, 2018


This is a duplicate of a high-performance computing blog I authored for Cray, originally published at the Cray Blog Site.


Let’s talk operating systems…

Take a step back and look at the systems you have in place to run applications and workloads that require high core counts to get the job done. (You know, jobs like fraud detection, seismic analysis, cancer research, patient data analysis, weather prediction, and more — jobs that can really take your organization to the next level, trying to solve seriously tough challenges.)

You’re likely running a cutting-edge processor, looking to take advantage of the latest and greatest innovations in compute processing and floating-point calculations. The models you’re running are complex, pushing the bounds of math and science, so you want the best you can get when it comes to processor power.

You’re probably looking at a high-performance file system to ensure that your gigantic swaths of data can be accessed, processed, and stored so your outcomes tell the full story.

You’re most likely using a network interconnect that can really move so that your system can hum, connecting data and processing in a seamless manner, with limited performance hit.

We can go on and on, in terms of drive types, memory choices, and even cabling decisions. The more optimized your system can be for a high-performance workload, the faster and better your results will be.

But what about your software stack, starting with your operating system?

Live tweet chat

Join us for a live online chat at 10 a.m. Tuesday, June 19 — “Do you need an HPC-optimized OS?” — to learn about optimizing your HPC workloads with an improved OS. You can participate using a Twitter, LinkedIn or Facebook account and the hashtag #GetHPCOS.

Sunny Sundstrom and I will discuss what tools to use to customize the environment for your user base, whether there’s value in packaging things together, and what the underlying OS really means for you.

Bring your questions or offer your own expertise.

Moment of truth time

Are you optimizing your system at the operating environment level, like you are across the rest of your system? Are you squeezing every bit of performance benefit out of your system, even when it comes to the OS?

Think about it — if you’re searching for an additional area to drive a performance edge, the operating system could be just the place to look:

  • There are methods to allocate jobs to run on specific nodes tuned for specific workloads — driven at the OS level.
  • You can deploy a lighter-weight OS instance on your working compute node to avoid unnecessarily bringing down the performance of the job, while ensuring your service nodes can adequately run and manage your system.
  • You can decrease jitter of the overall system by using the OS as your control point.

All of this at the OS level.
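
To make the first point a bit more tangible, here is a toy, scheduler-agnostic Python sketch of steering jobs onto nodes tuned for a given workload class. The node attributes and job tags are invented for the example; in practice this is expressed through your batch scheduler and the OS image each node class boots.

    # Toy sketch: match jobs to nodes tuned for their workload class.
    # Node attributes and job requirements are invented for illustration;
    # a real system would express these through its scheduler and OS images.

    nodes = [
        {"name": "n001", "tuning": "high-memory", "free": True},
        {"name": "n002", "tuning": "gpu",         "free": True},
        {"name": "n003", "tuning": "low-jitter",  "free": False},
        {"name": "n004", "tuning": "low-jitter",  "free": True},
    ]

    jobs = [
        {"id": "fraud-scan",      "needs": "low-jitter"},
        {"id": "seismic-imaging", "needs": "high-memory"},
        {"id": "training-run",    "needs": "gpu"},
    ]

    def place(jobs, nodes):
        """Greedily assign each job to a free node with the matching tuning."""
        placement = {}
        for job in jobs:
            for node in nodes:
                if node["free"] and node["tuning"] == job["needs"]:
                    placement[job["id"]] = node["name"]
                    node["free"] = False
                    break
            else:
                placement[job["id"]] = "pending (no tuned node free)"
        return placement

    print(place(jobs, nodes))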

Take a deeper look at your overall system, including the software stack, and specifically your operating system and operating environment.

You may be pleasantly surprised to find that you can drive even more performance in a seemingly unexpected place.

Until next time,

JBG
@jbgeorge

Add our 6/19 tweet chat to your calendar for further discussion.

8 Letters to HPC Success – SOFTWARE

March 26, 2018 Leave a comment


This is a duplicate of a high-performance computing blog I authored for Cray, originally published at the Cray Blog Site.


These days, more and more organizations are using high-performance computing systems to seek out answers to big questions. From detecting fraud in financial transactions within milliseconds, to scouring the universe for life, to working toward a cure for cancer, the applications working on these questions require an infrastructure that allows for numerous cores, large swaths of data, and rapid results.

These systems are carefully designed to optimize application performance at scale. Processors, memory, storage, networking — all carefully selected and implemented to drive optimal performance.

However, far too many vendors ignore the impact of another key area of the infrastructure stack … the SOFTWARE.

Software is a key component and a remarkable enabler for gleaning optimal performance from an application and a system. And that means getting that much more performance out of your system, getting you to insights and answers faster.

Along with fine-tuning your hardware infrastructure, imagine the impacts of an HPC-optimized OS and operating environment. You could witness the impact of lower jitter, more focused intelligence built into the various levels of the hardware system, and greater levels of job deployment flexibility.

Or think of the impact of a unified, cross-processor programming environment, designed to help your application developers get the most out of their HPC applications — a programming interface that gives them broad control and flexibility when it comes to porting, compiling, debugging and optimizing the applications that get you results.

These software capabilities are the reality for today’s Cray customers.

From the Cray Linux® Environment to the Cray Programming Environment, from powerful systems management to amazing storage telemetry, and from emerging cloud capabilities to effective analytics packages, we enable our customers to take performance to the next level with our comprehensive software portfolio.

Here at Cray, we understand the power of the software stack and have been committed to building the best for several decades. In addition to our work with our ecosystem of software partners and open source communities, we remain big believers in rolling up our sleeves and getting into the code. Our software engineering teams work hand in hand with our system hardware teams to ensure Cray systems are optimized at all levels, and find ways to extend the reach of our partners and open-source projects.

And there’s no slowing down in sight.

It’s time you leveraged the power of HPC-optimized software to answer the big questions you have in your business.

In the coming weeks we’ll bring you a more in-depth look at Cray’s HPC-optimized software stack with blogs, videos, webinars and other helpful tools so that you can maximize the performance of your Cray systems and applications.

Learn more about Cray software.

HPC: Enabling Fraud Detection, Keeping Drivers Safe, Helping Cure Disease, and So Much More

November 14, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Ask kids what they want to be when they grow up, and you will often hear aspirations like:

  • “I want to be an astronaut, and go to outer space!”
  • “I want to be a policeman / policewoman, and keep people safe!”
  • “I want to be a doctor – I could find a cure for cancer!”
  • “I want to build cars / planes / rocket ships / buildings / etc – the best ones in the world!”
  • “I want to be an engineer in a high performance computing department!”

OK, that last one rarely comes up.

Actually, I’ve NEVER heard it come up.

But here’s the irony of it all…

That last item is often a major enabler for all the other items.

Surprised?  A number of people are.

For many, “high performance computing” – or “HPC” for short – often has a reputation for being a largely academic engineering model, reserved for university PhDs seeking to prove out futuristic theories.

The reality is that high-performance computing has influenced a number of things we take for granted on a daily basis, across a number of industries, from healthcare to finance to energy and more.

HPC has been an engineering staple in a number of industries for many years, and has enabled many of the innovations we all enjoy on a daily basis.  And it’s critical to how we will function as a society going forward.

Here are a few examples:

  • Do you enjoy driving a car or other vehicle?  Thank an HPC department at the auto manufacturer. There’s a good chance that an HPC effort was behind modeling the safety hazards that you may encounter as a driver.  Unexpected road obstacles, component failure, wear and tear of parts after thousands of miles, and even human driver behavior and error.
  • Have you fueled your car with gas recently?  Thank an HPC department at the energy / oil & gas company.  While there is innovation around electric vehicles, many of us still use gasoline to ensure we can get around.  Exploring for, and finding, oil can be an arduous, time-consuming, expensive effort.  With HPC modeling, energy companies can find pockets of resources sooner, limiting the amount of exploratory drilling, and get fuel for your car to you more efficiently.
  • Have you been treated for illnesses with medical innovation?  Thank an HPC department at the pharmaceutical firm and associated research hospitals.  Progress in the treatment of health ailments can trace innovation to HPC teams working in partnership with health care professionals.  In fact, much of today’s research on curing the world’s diseases, and on genomics, is done on HPC clusters.
  • Have you ever been proactively contacted by your bank about suspected fraud on your account?  Thank an HPC department at the financial institution.  Oftentimes, banking companies will use HPC clusters to run millions of Monte Carlo simulations to model out financial fraud and intrusive hacks.  The number of models required and the depth at which they analyze demand a significant amount of processing power, requiring a stronger-than-normal computing structure.  (A tiny sketch of the idea follows this list.)
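
To give a flavor of why those simulations need so much processing power, here is a deliberately tiny Monte Carlo sketch in Python. The transaction model and the “suspicious” rule are invented for illustration — real fraud models are far more sophisticated — but the pattern of running huge numbers of independent trials is exactly what maps well onto an HPC cluster.

    # Tiny Monte Carlo illustration -- the model and thresholds are invented.
    # The point: millions of independent trials parallelize naturally across a cluster.
    import random

    def simulate_account(num_transactions=1000, fraud_threshold=3):
        """One trial: does a synthetic account cross the 'suspicious activity' bar?"""
        suspicious = 0
        for _ in range(num_transactions):
            amount = random.lognormvariate(4.0, 1.5)      # synthetic transaction size
            if amount > 500 and random.random() < 0.01:   # crude 'suspicious' rule
                suspicious += 1
        return suspicious >= fraud_threshold

    def estimate_flag_rate(trials=2000):
        """Fraction of simulated accounts that would be flagged for review."""
        flagged = sum(simulate_account() for _ in range(trials))
        return flagged / trials

    if __name__ == "__main__":
        # On a real cluster, these trials would be sharded across many nodes.
        print(f"Estimated flag rate: {estimate_flag_rate():.2%}")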

And the list goes on and on.

  • Security enablement
  • Banking risk analysis
  • Space exploration
  • Aircraft safety
  • Forecasting natural disasters
  • So much more…

If it’s a tough problem to solve, more likely than not, HPC is involved.

And that’s what’s so galvanizing about HPC. It is a computing model that enables us to do so many of the things we take for granted today, but is also on the forefront of new innovation coming from multiple industries in the future.

HPC is also a place where emerging technologies get early adoption, mainly because experts in HPC require new tech to get even deeper into their trade.  Open source is a major staple of this group of users, especially with deep adoption of Linux.

You also see early adoption of open source cloud tech like OpenStack (to help institutions share compute power and storage to collaborate) and of open source distributed storage tech like Ceph (to connect highly performant file systems to colder, backup storage, often holding “large data”).  I anticipate we will see this space be among the first to broadly adopt tech in Internet of Things (IoT), blockchain, and more.

HPC has been important enough that governments around the world have funded multiple initiatives to drive more innovation using the model.

This week, there is a large gathering of HPC experts in Salt Lake City (SuperComputing16) for engineers and researchers to meet / discuss / collaborate on enabling HPC even further.  From implementing tech like cloud and distributed storage, to best practices in modeling and infrastructure, to driving more progress in medicine, energy, finance, security, and manufacturing, this should be a stellar week with some of the best minds around.  (SUSE is out here as well – David has a great blog here on everything that we’re doing at the event.)

High performance computing: take a second look – it may be much more than you originally thought.

And maybe it can help revolutionize YOUR industry.

Until next time,

JOSEPH
@jbgeorge

Thoughts on the Spring 2012 OpenStack Design Summit


It’s been a couple of weeks since the OpenStack summit took place in San Francisco.  It was a great one, and I’m finally getting some time to put down a few thoughts about this year’s show.

The company I work for, Dell, chose to sponsor again, which was great.  That would make five OpenStack conferences in a row, including the first one in Austin before OpenStack was announced.

It was great to see all the familiar faces, some with new companies.  And there were a number of new faces, which is a great indicator of the progress the OpenStack movement is making.  In fact, in the first keynote delivered by Jonathan Bryce, he asked for a show of hands of those who had never been to an OpenStack Summit before – I ballparked it at about 25% of the room as new!

Some interesting takeaways from the conference:

  • The user community showed up
      
    The topic of users has been coming up at our local Austin OpenStack meetup often, and I was glad to see a number of inquisitive users come to the show to learn about using OpenStack in operation.  Users are an important part of our community’s evolution, and it was good to see that group out in force to have their voices heard.
      
  • HPC as a cloud use case
      
    In a number of user sessions, high performance computing came up as a use case on OpenStack.  This has not been a space where I would have expected HPC to come up as a technology, but in thinking about it, it makes sense.  Similar to other spaces, the HPC communities are looking for more flexible, extensible platforms to build their systems on.
      
  • More user adoption of Crowbar
      
    Dell has been at the forefront of bare metal provisioning of multi-node OpenStack clouds since the advent of OpenStack, and every conference has featured Dell doing bare metal deployments live.  It was great to hear about the various methods of deployment that users were employing, but also enlightening to learn about all the users of Crowbar that we weren’t even aware of.  (It’s an open source community, so that happens. 🙂 )  We’re committed to continuing to drive Crowbar as a deployment / management / configuration framework, and it’s good to see the community adopting it as a platform.
      
  • Continuing interest in the Dell OpenStack-Powered Cloud Solution (www.dell.com/OpenStack)
      
    Lots of good work is being done all over the community – software, services, and public clouds featuring OpenStack.  But I was happy that Dell was still clearly focused on being a central provider of OpenStack as an on-premises cloud solution, whether a private cloud for IT or a public cloud option for service providers to offer.  Along with the announcement of the Emerging Solutions Ecosystem, which features a number of Dell partners like Canonical, enStratus, and Mirantis, there were a number of great discussions on how customers could get going on OpenStack asap.
      

And there’s a ton more that I’m not covering – the foundation, user group formation, hypervisor talk, etc, etc, etc – I’ll let you do that.

Drop me a comment about some of the things that you took away from the summit.  There was a lot to be excited about.

Already looking forward to the next summit in the fall.

Until next time,

JBGeorge
@jbgeorge

More info: