Archive

Posts Tagged ‘tech’

HPE Discover 2021: Can’t-miss sessions showcase how HPC is powering digital transformation

June 4, 2021

This is a duplicate of a high-performance computing blog I authored for HPE, originally published at the HPE Newsroom on June 1, 2021.

Join us at HPE Discover 2021 to learn how innovative high-performance computing (HPC) solutions and technologies are driving digital transformation from enterprise to edge.


As our premier virtual event, HPE Discover 2021 is where the next wave of digital transformation begins—and high-performance computing is playing a critical role in that transformation. Join us for three days packed with actionable live and on-demand sessions, key announcements, and much more.

Register here for HPE Discover 2021.

How HPC and supercomputing are accelerating digital transformation

Welcome to the exascale era, where HPE HPC solutions scale up or scale out, on-premises or in the cloud, with purpose-built storage and software. Today, HPC is powering innovation for artificial intelligence, scientific research, drug discovery, advanced computing, engineering, and much more. This means you can run all your workloads within your economic requirements.

Ready to learn more? Here’s a list of recommended spotlight and breakout sessions plus demos to help you plan your Discover 2021 experience—and dive into all things HPC.

Top recommended HPC sessions

Unlocking insight to drive innovation from edge to exascale   SL4447

The next wave of digital transformation will be driven by unlocking the full potential of data using purpose-built software and infrastructure that speed time to adoption and rapid scaling of HPC, AI, and analytics. Please join us to learn how HPE is enabling this data-driven future from edge to core to cloud, at any scale—for every enterprise.

Challenges scaling AI into production? Learn how to fix them   B4452

If you’re struggling to unlock the full potential of AI, spend 8 minutes with us to learn how to move past the most common challenges and achieve AI at any scale.

Accelerating your AI training models   B4424

Training AI models is often both compute- and data-intensive due to the massive size and complexity of new algorithms. In this session, learn how new innovations in HPE workload-optimized systems, such as in-memory capability paired with mixed acceleration, dramatically speed up ingest and time to production for these AI models.

From the world’s largest AI machines to your first 17″ AI edge data center   DEMO4459

In the age of insight, AI is everywhere. HPE provides end-to-end AI infrastructure across all environments, for organizations of all sizes, and for all industries and mission areas. This session gives an overview of the portfolio and demonstrates AI solution examples from initial adoption to large-scale production.

Unleash the power of analytics for your enterprise at the edge   B4451

Unlocking insights from data has never been more critical. Soon, the majority of new enterprise data will be created outside of the data center or public cloud, at the edge. Learn how HPE helps customers gain insights with AI at the edge in a variety of use cases, with the flexibility to consume it as a service.

Hewlett Packard Labs and Carnegie Clean Energy revolutionize wave energy with reinforcement learning   B4365

Wave energy capture brings the unique challenges of complex wave dynamics, modelling errors, and changes in generator dynamics. Hear from Hewlett Packard Labs and Carnegie Clean Energy on their mission to develop a self-learning wave energy converter using deep reinforcement learning technology—pushing trustworthiness in the next generation of AI.

Live demos

We will also be featuring live on-location demos at Chase Center and the Mercedes Formula 1 Factory, scheduled to take place on June 22 or 23 (depending on which region you’re joining from). During the times specified below, HPE will have experts available to help answer your questions.

  • AMS: 11:00 AM – 11:45 AM PDT and 11:45 AM – 12:30 PM PDT on Tuesday (Day 1)
  • APJ: 2:30 PM – 3:15 PM JST and 3:15 PM – 4:30 PM JST on Wednesday (Day 1)
  • EMEA: 12:30 PM – 1:15 PM CET and 1:15 PM – 2:00 PM CET on Wednesday (Day 1)

Interested in other sessions? We invite you to explore the full line-up of sessions in our content catalog and build your own agenda. You can also view the agenda for each region here.

Build your playlist

Want to build your own playlist? Here’s how. Once registered for HPE Discover, log in to the virtual platform to view the keynotes, sessions, demos, and more. You can filter by content type or area of interest, or use the keyword search. Then simply click the “+” icon to add an item to your My Playlist. You can also download your playlist into your preferred personal calendar.

Register now.

We look forward to seeing you virtually at HPE Discover 2021!

Is Your HPC System Fully Optimized? + Live Chat

June 11, 2018


This is a duplicate of a high-performance computing blog I authored for Cray, originally published at the Cray Blog Site.


Let’s talk operating systems…

Take a step back and look at the systems you have in place to run applications and workloads that require high core counts to get the job done. (You know, jobs like fraud detection, seismic analysis, cancer research, patient data analysis, weather prediction, and more — jobs that can really take your organization to the next level by tackling seriously tough challenges.)

You’re likely running a cutting-edge processor, looking to take advantage of the latest and greatest innovations in compute processing and floating-point calculations. The models you’re running are complex, pushing the bounds of math and science, so you want the best you can get when it comes to processor power.

You’re probably looking at a high-performance file system to ensure that your gigantic swaths of data can be accessed, processed, and stored so your outcomes tell the full story.

You’re most likely using a network interconnect that can really move so that your system can hum, connecting data and processing in a seamless manner, with limited performance hit.

We can go on and on, in terms of drive types, memory choices, and even cabling decisions. The more optimized your system can be for a high-performance workload, the faster and better your results will be.

But what about your software stack, starting with your operating system?

Live tweet chat

Join us for a live online chat at 10 a.m. Tuesday, June 19 — “Do you need an HPC-optimized OS?” — to learn about optimizing your HPC workloads with an improved OS. You can participate using a Twitter, LinkedIn or Facebook account and the hashtag #GetHPCOS.

Sunny Sundstrom and I will discuss what tools to use to customize the environment for your user base, whether there’s value in packaging things together, and what the underlying OS really means for you.

Bring your questions or offer your own expertise.

Moment of truth time

Are you optimizing your system at the operating environment level, like you are across the rest of your system? Are you squeezing every bit of performance benefit out of your system, even when it comes to the OS?

Think about it — if you’re searching for an additional area to drive a performance edge, the operating system could be just the place to look:

  • There are methods to allocate jobs to run on specific nodes tuned for specific workloads — driven at the OS level.
  • You can deploy a lighter-weight OS instance on your working compute node to avoid unnecessarily bringing down the performance of the job, while ensuring your service nodes can adequately run and manage your system.
  • You can decrease jitter across the overall system by using the OS as your control point (see the sketch below).

All of this at the OS level.
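To make that last bullet concrete, here is a minimal sketch (Python on Linux, using the standard library’s os.sched_setaffinity) of one OS-level jitter-reduction tactic: keeping a compute-heavy process off the cores that handle system housekeeping. The two-core housekeeping split below is a hypothetical example, not a prescription – production HPC stacks typically drive this through the workload manager and kernel settings such as isolcpus and nohz_full.

    import os

    # Hypothetical split for a small compute node: cores 0-1 absorb OS
    # housekeeping (interrupts, daemons), the remaining cores run the job.
    HOUSEKEEPING_CORES = {0, 1}
    WORKLOAD_CORES = set(range(os.cpu_count() or 8)) - HOUSEKEEPING_CORES

    def pin_to_workload_cores():
        """Pin this process to the 'quiet' cores to reduce OS jitter."""
        os.sched_setaffinity(0, WORKLOAD_CORES)  # pid 0 = the calling process

    if __name__ == "__main__":
        pin_to_workload_cores()
        print("Running on cores:", sorted(os.sched_getaffinity(0)))
        # ... the compute-heavy portion of the job would run here ...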

Take a deeper look at your overall system, including the software stack, and specifically your operating system and operating environment.

You may be pleasantly surprised to find that you can drive even more performance in a seemingly unexpected place.

Until next time,

JBG
@jbgeorge

Add our 6/19 tweet chat to your calendar for further discussion.

OpenStack, Now and Moving Ahead: Lessons from My Own Personal Transformation

December 15, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

It’s a transformative time for OpenStack.

And I know a thing or two about transformations.

Over the last two and a half years, I’ve managed to lose over 90 pounds.

(Yes, you read that right.)

It was a long and arduous effort, and it is a major personal accomplishment that I take a lot of pride in.

Lessons I’ve Learned

When you go through a major transformation like that, you learn a few things about the process, the journey, and about yourself.

With OpenStack on my mind these days – especially after being nominated for the OpenStack Foundation Board election – I can see correlations between my story and where we need to go with OpenStack.

While there are a number of lessons learned, I’d like to delve into three that are particularly pertinent for our open source cloud project.

1. Clarity on the Goal and the Motivation

It’s a very familiar story for many people.  Over the years, I had gained a little bit of weight here and there as life events occurred – graduated college, first job, moved cities, etc. And I had always told myself (and others), “By the time I turned 40 years old, I will be back to my high school weight.”

The year I was to turn 40, I realized that I was running out of time to make good on my word!

And there it was – my goal and my motivation.

So let’s turn to OpenStack – what is our goal and motivation as a project?

According to wiki.OpenStack.org, the OpenStack Mission is “to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable. OpenStack is open source, openly designed, openly developed by an open community.”

That’s our goal and motivation:

  • meet the needs of public and private clouds
  • no matter the size
  • simple to deploy
  • very scalable
  • open across all parameters

While we exist in a time where it’s very easy to be distracted by every new, shiny item that comes along, we must remember our mission, our goal, our motivation – and stay true to what we set out to accomplish.

2. Staying Focused During the “Middle” of the Journey

When I was on the path to lose 90 pounds, it was very tempting to be satisfied during the middle part of the journey.

After losing 50 pounds, needless to say, I looked and felt dramatically better than I had been before.  Oftentimes, I was congratulated – as if I had reached my destination.

But I had not reached my destination.

While I had made great progress – and there were very tangible results to demonstrate that – I had not yet fully achieved my goal.  And looking back, I am happy that I was not content to stop halfway through. While I had a lot to be proud of at that point, there was much more to be done.

OpenStack has come a long way in its fourteen releases:

  • The phenomenal Newton release focused on scalability, interoperability, and resiliency – things that many potential customers and users have been waiting for.
  • The project has now been validated as 100% compliant by the Core Infrastructure Initiative (CII) as part of the Linux Foundation, a major milestone toward the security of OpenStack.
  • Our community now offers the “Certified OpenStack Administrator” certification, a staple of datacenter software that much of the enterprise expects, further validating OpenStack for them.

We’ve come a long way.   But there is more to go to achieve our ultimate goal.  Remember our mission: open source cloud, public and private, across all size clouds, massively scalable, and simple to implement.

We are enabling an amazing number of users now, but there is more to do to achieve our goal. While we celebrate our current success, and as more and more customers are being successful with OpenStack in production, we need to keep our eyes on the prize we committed to.

3. Constantly Learning and Adapting

While losing 90 pounds was a major personal accomplishment, it could all have been in vain if I did not learn how to maintain the weight loss once it was achieved.

This meant learning what worked and what didn’t work, as well as adapting to achieve a permanent solution.

Case in point: a part of most weight loss plans is to get plenty of water daily, something I still do to this day. While providing numerous health advantages, it is also a big help with weight loss. However, I found that throughout the day, I would get consumed with daily activities and reach the end of the day without having reached my water requirement goal.

Through some experimentation with tactics – which included setting up reminders on my phone and keeping water with me at all times, among other ideas – I arrived at my personal solution: GET IT DONE EARLY.

I made it a point to get through my water goal at the beginning of the day, before my daily activities began. This way, if I did not remember to drink regularly throughout the day, it was of no consequence since I had already met my daily goal.

We live in a world where open source is being adopted by ever more people, many of them new to open source. From Linux to Hadoop to Ceph to Kubernetes, we are seeing more and more projects find success with a new breed of users.  OpenStack’s role is not to shun these projects as isolationists, but rather to understand how OpenStack adapts so that we achieve maximum attainment of our mission.

This also means that we understand how our project gets “translated” to the bevy of customers who have legitimate challenges to address that OpenStack can help with. It means that we help potential users wade through the cultural IT changes that will be required.

Learning where our market is taking us, as well as adapting to the changing technology landscape, remains crucial for the success of the project.

Room for Optimism

I am personally very optimistic about where OpenStack goes from here. We have come a long way, and have much to be proud of.  But much remains to be done to achieve our goal, so we must be steadfast in our resolve and focus.

And it is a mission that we can certainly accomplish.  I believe in our vision, our project, and our community.

And take it from me – reaching BIG milestones is very, very rewarding.

Until next time,

JOSEPH
@jbgeorge

Thoughts from 2010 Gartner Data Center Conference (Part 2)

December 12, 2010

Hello all – hope you’re having a good Saturday / Sunday wherever you might be.

Wanted to finish putting down thoughts, insights, etc. from my time at the Gartner Data Center conference this past week.  (You can read Part 1 here –  https://jbgeorge.net/2010/12/11/thoughts-from-2010-gartner-data-center-conference-part-1/.)

  • We need to understand the success / real world utilization of ITIL and other benchmark frameworks – are they working?
  • More and more, in the era of cloud, we are finding it is no longer necessary to keep an individual system up at all costs, as long as overall compute and storage integrity are maintained.
  • Traditional management models assume that systems should be managed so that failure rarely happens. Newer models assume that failure WILL happen, and focus on the shortest MTTR (mean time to recovery / repair) – see the sketch after this list.
  • Traditional models try to implement pervasive automation, whereas newer models focus on selective automation.  Why must we automate / virtualize / etc. everything?  Choose wisely based on criticality and true need.
  • We’ve heard of JEOS – the “just enough” operating system.  Gartner spoke of “just enough” practice vs. “best” practice.  Are we in the era of “just enough”?
  • Again, a reiteration of the need for a DevOps skill set.
  • Organizational alignment is still a key facet of moving the IT organization.
  • “We are only at the end of the beginning” of the cloud era.  Watch for Cloud 2.0 in the years ahead (market-based computing, hybrid clouds as the norm, etc.).
  • Still a lot of talk about the Big Four (HP, CA, IBM, BMC) – they were slow to jump on with virtualization, but more aggressive with cloud.
  • Definite focus on the network being a key management focal point.  Similar to the theory that your band’s ripping concert is only as good as the quality of your sound man.
  • The recession will be viewed in hindsight as a pivot event for the server market – paradigm shifts, vendor repositioning, etc.
  • Some important trends to watch going forward: big data, unified communication, client virtualization, compute density / scaling vertically, converged fabrics
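That second bullet – design for failure, optimize for recovery – is worth a concrete illustration. Below is a minimal Python sketch, assuming a hypothetical TransientError raised by some flaky dependency (not any particular vendor’s API): instead of trying to prevent every fault, it expects them and focuses on recovering quickly.

    import random
    import time

    class TransientError(Exception):
        """Stand-in for whatever recoverable failure your stack raises."""

    def call_with_retries(op, max_attempts=5, base_delay=0.5):
        """Assume failure WILL happen; optimize for fast recovery, not prevention."""
        for attempt in range(1, max_attempts + 1):
            try:
                return op()
            except TransientError:
                if attempt == max_attempts:
                    raise  # recovery failed; escalate
                # Exponential backoff with jitter keeps mean time to recovery
                # short without hammering an already-struggling service.
                time.sleep(base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5))

The traditional model would invest in keeping op() from ever failing; the newer model accepts the occasional failure and keeps recovery time short.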

Another great event – look forward to next year. 

Until next time,

JBGeorge
@jbgeorge

The Cloud and Cotton Candy

October 11, 2010

  
The other day I was asked a far too familiar question…

          “What is a cloud?”

Oh, man – here we go…

Like a kid in a candy store, I began to discuss the origins and evolution of IT.  Servers, storage, networks, virtualization… on and on, reveling in exercising my cloud muscle.  IaaS, PaaS, SaaS… every “*aaS” I could think of!  Vendors, strategies, theories, models… it was an amazing tribute to one of our most talked about spaces, if I do say so myself.

    
The response from the interested party?

          “Well, it kinda looks like cotton candy.  Hey!  Let’s get some cotton candy!”

  
At this point, I should note that the question came from my five-year-old. And the question was actually phrased, “Daddy, what is a cloud?”

  
Often, those of us in the tech world that live and breathe the bleeding edge forget to translate all this cool gadgetry into tangible benefits for the rest of the world.  Not that the “rest of the world” couldn’t grasp the technical concepts, but great technologies are often great because of the simple benefits they bring.

So again – “what is a cloud?”
  

  • For the small business, cloud could mean more focus on core competencies and less on IT.
  • Cloud can help drive better efficiency when it comes to power consumption and carbon footprint, helping the environment.
  • In the case of the State of Minnesota, which has agreed to begin adopting cloud computing, it could mean lower long-run overhead and costs associated with running the state – which could turn into more services, lower taxes, etc. (Learn more about this development at http://bit.ly/bz47j4.)

  
Now there’s no need to try and translate every tech concept into layman’s terms, but for better or worse, cloud’s getting a lot of play. It’s important we remember to articulate the value of these amazing technologies in terms that demonstrate how they make the world better.

How have you answered the question “What is a cloud?” with non-techies you’ve encountered? 

Share your stories here or tweet me on Twitter (@jbgeorge).

OK, then – I’m off to get some cotton candy.

Until next time,

Joseph B George
@jbgeorge / www.jbgeorge.net