Archive for the ‘open source’ Category

Is Your HPC System Fully Optimized? + Live Chat

June 11, 2018


This is a duplicate of a high-performance computing blog I authored for Cray, originally published at the Cray Blog Site.


Let’s talk operating systems…

Take a step back and look at the systems you have in place to run applications and workloads that require high core counts to get the job done. (You know, jobs like fraud detection, seismic analysis, cancer research, patient data analysis, weather prediction, and more — jobs that can take your organization to the next level as it tackles seriously tough challenges.)

You’re likely running a cutting-edge processor, looking to take advantage of the latest and greatest innovations in compute processing and floating-point calculations. The models you’re running are complex, pushing the bounds of math and science, so you want the best you can get when it comes to processor power.

You’re probably looking at a high-performance file system to ensure that your gigantic swaths of data can be accessed, processed, and stored so your outcomes tell the full story.

You’re most likely using a network interconnect that can really move, so your system can hum, connecting data and processing seamlessly with minimal performance penalty.

We can go on and on, in terms of drive types, memory choices, and even cabling decisions. The more optimized your system can be for a high-performance workload, the faster and better your results will be.

But what about your software stack, starting with your operating system?

Live tweet chat

Join us for a live online chat at 10 a.m. Tuesday, June 19 — “Do you need an HPC-optimized OS?” — to learn about optimizing your HPC workloads with an improved OS. You can participate using a Twitter, LinkedIn or Facebook account and the hashtag #GetHPCOS.

Sunny Sundstrom and I will discuss what tools to use to customize the environment for your user base, whether there’s value in packaging things together, and what the underlying OS really means for you.

Bring your questions or offer your own expertise.

Moment of truth time

Are you optimizing your system at the operating environment level, like you are across the rest of your system? Are you squeezing every bit of performance benefit out of your system, even when it comes to the OS?

Think about it — if you’re searching for an additional area to drive a performance edge, the operating system could be just the place to look:

  • There are methods to allocate jobs to run on specific nodes tuned for specific workloads — driven at the OS level.
  • You can deploy a lighter-weight OS instance on your working compute node to avoid unnecessarily bringing down the performance of the job, while ensuring your service nodes can adequately run and manage your system.
  • You can decrease jitter of the overall system by using the OS as your control point.

All of this at the OS level.
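To make “the OS as your control point” a bit more concrete, here is a minimal sketch, assuming a Linux compute node: it pins the current process to dedicated compute cores and leaves a reserved core for OS housekeeping, one common tactic for trimming jitter. The core IDs and the choice to reserve a single core are illustrative assumptions, not a recommendation for any particular system.

```python
# Minimal sketch of one OS-level jitter-reduction tactic on a Linux node:
# pin the compute process to dedicated cores and leave a reserved core
# for OS housekeeping daemons. The number of reserved cores is illustrative.
import os

def pin_to_compute_cores(reserved_for_os: int = 1) -> set:
    """Restrict the current process to every core except the reserved ones."""
    all_cores = set(range(os.cpu_count() or 1))
    compute_cores = {c for c in all_cores if c >= reserved_for_os} or all_cores
    os.sched_setaffinity(0, compute_cores)  # pid 0 means "this process"; Linux-only call
    return compute_cores

if __name__ == "__main__":
    print("Compute work now confined to cores:", sorted(pin_to_compute_cores()))
```

In production HPC environments this kind of placement is usually handled by the workload manager and kernel boot parameters rather than application code, but the principle is the same: deciding at the OS level which cores do compute work and which do housekeeping.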

Take a deeper look at your overall system, including the software stack, and specifically your operating system and operating environment.

You may be pleasantly surprised to find that you can drive even more performance in a seemingly unexpected place.

Until next time,

JBG
@jbgeorge

Add our 6/19 tweet chat to your calendar for further discussion.

8 Letters to HPC Success – SOFTWARE

March 26, 2018


This is a duplicate of a high-performance computing blog I authored for Cray, originally published at the Cray Blog Site.


These days, more and more organizations are using high-performance computing systems to seek out answers to big questions. From detecting fraud in financial transactions within milliseconds, to scouring the universe for life, to working toward a cure for cancer, the applications working on these questions require an infrastructure that allows for numerous cores, large swaths of data, and rapid results.

These systems are carefully designed to optimize application performance at scale. Processors, memory, storage, networking — all carefully selected and implemented to drive optimal performance.

However, far too many vendors ignore the impact of another key area of the infrastructure stack … the SOFTWARE.

Software is a key component and a remarkable enabler for gleaning optimal performance from an application and a system. And that means getting that much more performance out of your system, getting you to insights and answers faster.

Along with fine-tuning your hardware infrastructure, imagine the impact of an HPC-optimized OS and operating environment. You could see lower jitter, more focused intelligence built into the various levels of the system, and greater job deployment flexibility.

Or think of the impact of a unified, cross-processor programming environment, designed to help your application developers get the most out of their HPC applications — a programming interface that gives them broad control and flexibility when it comes to porting, compiling, debugging and optimizing the applications that get you results.

These software capabilities are a reality for today’s Cray customers.

From the Cray Linux® Environment to the Cray Programming Environment, from powerful systems management to amazing storage telemetry, and from emerging cloud capabilities to effective analytics packages, we enable our customers to take performance to the next level with our comprehensive software portfolio.

Here at Cray, we understand the power of the software stack and have been committed to building the best for several decades. In addition to our work with our ecosystem of software partners and open source communities, we remain big believers in rolling up our sleeves and getting into the code. Our software engineering teams work hand in hand with our system hardware teams to ensure Cray systems are optimized at all levels, and find ways to extend the reach of our partners and open-source projects.

And there’s no slowing down in sight.

It’s time you leveraged the power of HPC-optimized software to answer the big questions you have in your business.

In the coming weeks we’ll bring you a more in-depth look at Cray’s HPC-optimized software stack with blogs, videos, webinars and other helpful tools so that you can maximize the performance of your Cray systems and applications.

Learn more about Cray software.

Running for the OpenStack Board: “All In” on OpenStack

January 5, 2017

This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

In thinking about the OpenStack community, our approach to the project going forward, and the upcoming Board elections, I’m reminded of a specific hand of the poker game Texas Hold ‘Em I observed a few years back between two players.

As one particular hand began, both players had similar chip stacks, and were each dealt cards that were statistically favorable to win.

The hand played out like most other hands – the flop, the turn, the river, betting, calling, and so on.  And as the game moved toward its conclusion, those of us observing could see that one player held the statistically better cards, and was presumably headed for the win.

But then the second player made a bold move that turned everything on its head.

He went “all in.”

The “all in” move in poker is one that commits all of your chips to the pot, and often requires your opponent to make a decision for most or all of their chips. It is an aggressive move in this scenario.

After taking some time to consider his options, the first player ultimately chose to fold his strong cards and cut his perceived losses, allowing the other player to claim the winnings.

And that prize was claimed almost entirely because of the “all in” strategy.

Clearly, going “all in” can be a very strong move indeed.

 

Decision Time in OpenStack

Next week – Monday, January 9 through Friday, January 13 – is an important week for the OpenStack community, as we elect the 2017 Individual Representatives to the Foundation’s Board of Directors.

I’m honored to have been nominated as a candidate for Board Director, to potentially serve the community again, as I did back in 2013.

Back in the summer of 2010, I was fortunate to be one of the few in the crowded ballroom at the Omni Hotel in Austin, Texas, witnessing the birth of the OpenStack project. And it is amazing to see how far it has come – but with a tremendous amount of work yet to do.

Over the years, we’ve been fortunate to celebrate tremendous wins and market excitement. Other times, there were roadblocks to overcome.  And similar to the aforementioned poker game, we often had to analyze “the hand” we were dealt, “estimate the odds” of where cloud customers and the market were headed, and position ourselves to maximize our chances for success – often trusting our instincts when available data was incomplete at best.

And, as with many new projects in a growth phase, our community was often asked to reconfirm its commitment to our mission. Our response was resounding and consistent on where we stood….

“All in.”

 

Remaining “All In” with OpenStack

As I mentioned a few weeks ago, it’s critical that we stay committed to the cause of OpenStack and its objective.

There are four key areas of focus for OpenStack that I hope to advocate if elected to the board.

  1. OpenStack adoption within the enterprise worldwide.  I am in the camp that very much believes in the private cloud (as well as public cloud), and that the open source and vendor communities need to put more effort and resources into ensuring OpenStack is the optimal private cloud out there, across all industries / geographies / etc.
  2. Designing and positioning OpenStack to address tangible business challenges.  The enterprise customer is not seeking a new technology – they are looking for things like ways to make IT management more self-service, a means to drive on-demand scalability of infrastructure and PaaS, and a way to operate workloads on-premise, AS WELL AS off-premise.
  3. Addressing the cultural IT changes that need to occur.  As cloud continues to permeate the enterprise IT organization, we need to deliver the right training and certifications to enable existing IT experts to transition to this new means of IT service.  If we can ensure these valuable people have a place in the new archetype, they will be our advocates as well.
  4. Championing the OpenStack operator.  The reality of cloud is not just in the using, but in the operating.  There is a strong contingent of operators within our community, and their role is critical to our success – we need to continue to enable this important function.

I’ve been fortunate to be a part of a number of technology movements in my career, just as they started to make the turn from innovative idea to consistent, reliable IT necessity. And this is why I continue to be excited about the prospect of OpenStack – I’m seeing growth with more customers, more use cases, more production implementations.

And, while there may be detractors out there, coining catchy and nonsensical “as-a-Service” buzzwords, my position on OpenStack should sound familiar – because it hasn’t changed since Day One.

“All in.”

And, if given the opportunity, I hope to partner with you to get the rest of the world “all in” on OpenStack as well.

Until next time,

JOSEPH
@jbgeorge

Joseph George is the Vice President of Solutions Strategy at SUSE, and is a candidate for OpenStack Board of Directors.  OpenStack Elections take place on the week of January 9, 2017.
Click here to learn more.

 

OpenStack, Now and Moving Ahead: Lessons from My Own Personal Transformation

December 15, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

It’s a transformative time for OpenStack.

And I know a thing or two about transformations.

Over the last two and a half years, I’ve managed to lose over 90 pounds.

(Yes, you read that right.)

It was a long and arduous effort, and it is a major personal accomplishment that I take a lot of pride in.

Lessons I’ve Learned

When you go through a major transformation like that, you learn a few things about the process, the journey, and about yourself.

With OpenStack on my mind these days – especially after being nominated for the OpenStack Foundation Board election – I can see correlations between my story and where we need to go with OpenStack.

While there are a number of lessons learned, I’d like to delve into three that are particularly pertinent for our open source cloud project.

1. Clarity on the Goal and the Motivation

It’s a very familiar story for many people.  Over the years, I had gained a little bit of weight here and there as life events occurred – graduated college, first job, moved cities, etc. And I had always told myself (and others), “By the time I turn 40, I will be back to my high school weight.”

The year I was to turn 40, I realized that I was running out of time to make good on my word!

And there it was – my goal and my motivation.

So let’s turn to OpenStack – what is our goal and motivation as a project?

According to wiki.OpenStack.org, the OpenStack Mission is “to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable. OpenStack is open source, openly designed, openly developed by an open community.”

That’s our goal and motivation:

  • meet the needs of public and private clouds
  • no matter the size
  • simple to deploy
  • very scalable
  • open across all parameters

While we exist in a time where it’s very easy to be distracted by every new, shiny item that comes along, we must remember our mission, our goal, our motivation – and stay true to what we set out to accomplish.

2. Staying Focused During the “Middle” of the Journey

When I was on the path to lose 90 pounds, it was very tempting to be satisfied during the middle part of the journey.

After losing 50 pounds, needless to say, I looked and felt dramatically better than I had before.  Oftentimes, I was congratulated – as if I had reached my destination.

But I had not reached my destination.

While I had made great progress – and there were very tangible results to demonstrate that – I had not yet fully achieved my goal.  And looking back, I am happy that I was not content to stop halfway through. While I had a lot to be proud of at that point, there was much more to be done.

OpenStack has come a long way in its fourteen releases:

  • The phenomenal Newton release focused on scalability, interoperability, and resiliency – things that many potential customers and users have been waiting for.
  • The project has now been validated as 100% compliant by the Core Infrastructure Initiative (CII), a Linux Foundation project – a major milestone for the security of OpenStack.
  • Our community now offers the “Certified OpenStack Administrator” certification, a staple of datacenter software that much of the enterprise expects, further validating OpenStack for them.

We’ve come a long way.  But there is more to do to achieve our ultimate goal.  Remember our mission: open source cloud, public and private, clouds of all sizes, massively scalable, and simple to implement.

We are enabling an amazing number of users now, but there is more to do to achieve our goal. While we celebrate our current success, and as more and more customers are being successful with OpenStack in production, we need to keep our eyes on the prize we committed to.

3. Constantly Learning and Adapting

While losing 90 pounds was a major personal accomplishment, it could all have been in vain if I did not learn how to maintain the weight loss once it was achieved.

This meant learning what worked and what didn’t work, as well as adapting to achieve a permanent solution.

Case in point: a part of most weight loss plans is to get plenty of water daily, something I still do to this day. While providing numerous health advantages, it is also a big help with weight loss. However, I found that throughout the day, I would get consumed with daily activities and reach the end of the day without having reached my water requirement goal.

Through some experimentation with tactics – which included setting up reminders on my phone and keeping water with me at all times, among other ideas – I arrived at my personal solution: GET IT DONE EARLY.

I made it a point to get through my water goal at the beginning of the day, before my daily activities began. This way, if I did not remember to drink regularly throughout the day, it was of no consequence since I had already met my daily goal.

We live in a world where open source is being adopted by ever more people, many of them open source newcomers. From Linux to Hadoop to Ceph to Kubernetes, we are seeing more and more projects find success with a new breed of users.  OpenStack’s role is not to shun these projects as isolationists, but rather to understand how OpenStack adapts so that we maximize attainment of our mission.

This also means that we understand how our project gets “translated” for the bevy of customers who have legitimate challenges to address that OpenStack can help with. It means that we help potential users wade through the cultural IT changes that will be required.

Learning where our market is taking us, as well as adapting to the changing technology landscape, remains crucial for the success of the project.

Room for Optimism

I am personally very optimistic about where OpenStack goes from here. We have come a long way, and have much to be proud of.  But much remains to be done to achieve our goal, so we must be steadfast in our resolve and focus.

And it is a mission that we can certainly accomplish.  I believe in our vision, our project, and our community.

And take it from me – reaching BIG milestones is very, very rewarding.

Until next time,

JOSEPH
@jbgeorge

HPC: Enabling Fraud Detection, Keeping Drivers Safe, Helping Cure Disease, and So Much More

November 14, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Ask kids what they want to be when they grow up, and you will often hear aspirations like:

  • “I want to be an astronaut, and go to outer space!”
  • “I want to be a policeman / policewoman, and keep people safe!”
  • “I want to be a doctor – I could find a cure for cancer!”
  • “I want to build cars / planes / rocket ships / buildings / etc – the best ones in the world!”
  • “I want to be an engineer in a high performance computing department!”

OK, that last one rarely comes up.

Actually, I’ve NEVER heard it come up.

But here’s the irony of it all…

That last item is often a major enabler for all the other items.

Surprised?  A number of people are.

For many, “high performance computing” – or “HPC” for short – often has a reputation for being a largely academic engineering model, reserved for university PhDs seeking to prove out futuristic theories.

The reality is that high performance computing has influenced a number of things we take for granted on a daily basis, across a number of industries, from healthcare to finance to energy and more.

HPC has been an engineering staple in a number of industries for many years, and has enabled a number of the innovations we all enjoy on a daily basis. And it’s critical to how we will function as a society going forward.

Here are a few examples:

  • Do you enjoy driving a car or other vehicle?  Thank an HPC department at the auto manufacturer. There’s a good chance that an HPC effort was behind modeling the safety hazards you may encounter as a driver: unexpected road obstacles, component failure, wear and tear of parts after thousands of miles, and even human driver behavior and error.
  • Have you fueled your car with gas recently?  Thank an HPC department at the energy / oil & gas company.  While there is innovation around electric vehicles, many of us still use gasoline to get around.  Exploring for, and finding, oil can be an arduous, time-consuming, expensive effort.  With HPC modeling, energy companies can find pockets of resources sooner, limit the amount of exploratory drilling, and get fuel to your car more efficiently.
  • Have you been treated for illness with recent medical innovations?  Thank an HPC department at the pharmaceutical firm and its associated research hospitals.  Progress in the treatment of health ailments can trace its innovation to HPC teams working in partnership with health care professionals.  In fact, much of the research done today to cure the world’s diseases, including genomics work, is done on HPC clusters.
  • Have you ever been proactively contacted by your bank about suspected fraud on your account?  Thank an HPC department at the financial institution.  Oftentimes, banks will use HPC clusters to run millions of Monte Carlo simulations to model financial fraud and intrusive hacks.  The number of models required, and the depth at which they analyze, demands serious processing power – a stronger-than-normal computing structure.  (A toy sketch of this kind of simulation follows this list.)
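As a purely illustrative aside on that last bullet, here is a toy sketch of the kind of Monte Carlo simulation banks run – estimating how often simulated daily fraud losses exceed a threshold. The distribution and every number in it are made-up assumptions; real institutions run far richer models, and millions of them.

```python
# Toy Monte Carlo sketch: estimate the probability that simulated daily
# fraud losses exceed a threshold. All parameters are illustrative.
import random

def tail_risk(trials: int = 1_000_000,
              mean_loss: float = 25_000.0,
              sigma: float = 9_000.0,
              threshold: float = 50_000.0) -> float:
    """Fraction of simulated days whose loss exceeds the threshold."""
    exceed = sum(1 for _ in range(trials) if random.gauss(mean_loss, sigma) > threshold)
    return exceed / trials

if __name__ == "__main__":
    random.seed(42)  # repeatable, purely for the sake of the example
    print(f"Estimated P(daily loss > $50,000): {tail_risk():.4f}")
```

An HPC cluster earns its keep by running enormous numbers of trials like these in parallel across many nodes – exactly why a stronger-than-normal computing structure is needed.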

And the list goes on and on.

  • Security enablement
  • Banking risk analysis
  • Space exploration
  • Aircraft safety
  • Forecasting natural disasters
  • So much more…

If it’s a tough problem to solve, more likely than not, HPC is involved.

And that’s what’s so galvanizing about HPC. It is a computing model that enables us to do so many of the things we take for granted today, but is also on the forefront of new innovation coming from multiple industries in the future.

HPC is also a place where emerging technologies get early adoption, mainly because experts in HPC require new tech to get even deeper into their trade.  Open source is a major staple of this group of users, especially with deep adoption of Linux.

You also see early adoption of open source cloud tech like OpenStack (to help institutions share compute power and storage to collaborate) and of open source distributed storage tech like Ceph (to connect highly performant file systems to colder, backup storage, often holding “large data”).  I anticipate we will see this space be among the first to broadly adopt tech in Internet-of-Things (IoT), blockchain, and more.

HPC has been important enough that governments around the world have funded multiple initiatives to drive more innovation using the model.

This week, there is a large gathering of HPC experts in Salt Lake City (SuperComputing16), where engineers and researchers meet, discuss, and collaborate on further enabling HPC.  From implementing tech like cloud and distributed storage, to best practices in modeling and infrastructure, to driving more progress in medicine, energy, finance, security, and manufacturing, this should be a stellar week with some of the best minds around.  (SUSE is out here as well – David has a great blog here on everything that we’re doing at the event.)

High performance computing: take a second look – it may be much more than you originally thought.

And maybe it can help revolutionize YOUR industry.

Until next time,

JOSEPH
@jbgeorge

Highlights from OpenStack Summit Barcelona

October 31, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

What a great week at the OpenStack Summit in Barcelona! Fantastic keynotes, great sessions, and excellent hallway conversations.  It was great to meet a number of new Stackers as well as rekindle old friendships from back when OpenStack kicked off in 2010.

A few items of note from my perspective:

OpenStack Foundation Board of Directors Meeting

OpenStack Board In Session

As I mentioned in my last blog, it is the right of every OpenStack member to attend / listen in on each board meeting that the OpenStack Foundation Board of Directors holds.  I made sure to head out on Monday and attend most of the day.  There was a packed agenda, so here are a few highlights:

  • There was an interesting discussion around the User Committee project that board member Edgar Magana is driving, covering its composition, whether members should be elected, and whether bylaw changes are warranted.  It was a deep topic that required further time, so it was deferred to a later discussion, with work to be done to map out the details. This is an important endeavor for the community, in my opinion – I will be keeping an eye on how it progresses.
  • A number of strong presentations were delivered by prospective Gold members as they made their cases to be added to that tier. I was especially happy to see a number of Chinese companies presenting and making their case.  China is a fantastic growth opportunity for the OpenStack project, and it was encouraging to see players in that market discuss all they are doing for OpenStack in the region.  Ultimately, we saw City Network, Deutsche Telekom, 99Cloud and China Mobile all get voted in as Gold members.
  • Lauren Sell (VP of Marketing for the Foundation) spoke about a model her team is investigating for how our community can engage with other projects in terms of user events and conferences.  Kubernetes, Ceph, and other projects were named as examples.  This is a great indicator of how we’ve evolved, as it highlights that multiple projects are often needed to address actual business challenges – a strong sign of maturity for the community.

Two Major SUSE OpenStack Announcements

SUSE advancements in enterprise ready OpenStack made its way to the Summit in a big way this week.

  1. SUSE OpenStack Cloud 7:  While we are very proud to be one of the first vendors to provide an enterprise-grade, Newton-based OpenStack distribution, this release also offers features like new Container-as-a-Service support and non-disruptive upgrades.

    Wait, non-disruptive upgrade?  As in, no downtime?  And no service interruptions?

    That’s right – disruption to service is a big no-no in the enterprise IT world, and SUSE OpenStack Cloud 7 now gives you the ability to stay live during an OpenStack upgrade.

  2. Even more reason to become a COA.  All the buzz around the Foundation’s “Certified OpenStack Administrator” exam got even better this week when SUSE announced that the exam would now feature the SUSE platform as an option.

    And BIG bonus win – if you pass the COA using the SUSE platform, you will be granted

    1. the Foundation’s COA certification
    2. the SUSE Certified Administrator in OpenStack Cloud certification

That’s two certifications with one exam.  (Be sure to specify the SUSE platform when taking the exam to take advantage of this option.)

There’s much more to these critical announcements so take a deeper look into them with these blogs by Pete Chadwick and Mark Smith.  Worth a read.

 Further Enabling the Enterprise

As you know, enterprise adoption of OpenStack is a major passion of mine – I’ve captured a couple more signs I saw this week of OpenStack continuing to head in the right direction.

  • Progress in Security. On stage this week, OpenStack was awarded the Core Infrastructure Initiative (CII) Best Practices badge.  The CII is a Linux Foundation project that validates open source projects, specifically for security, quality and stability. By earning this badge, OpenStack is now validated by a trusted third party as 100% compliant.  Security FTW!
  • Workload-based Sample Configs.  This stable of assets has been building for some time, but OpenStack.org now boasts a number of reference architectures addressing some of the most critical workloads.  From web apps to HPC to video processing and more, there are great resources on how to optimize OpenStack for these workloads.  (Being a big data fan, I was particularly happy with the big data resources here.)

I’d be interested in hearing what you saw as highlights as well – feel free to leave your thoughts in the comments section.

OK, time to get home, get rested – and do this all over again in 6 months in Boston.

(If you missed the event this time, OpenStack.org has you covered – start here to check out a number of videos from the event, and go from there.)

Until next time,

JOSEPH
@jbgeorge

 

Boston in 2017!

This Week: OpenStack Summit Barcelona!

October 26, 2016

This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

In a few days, Stackers will congregate in beautiful Barcelona, Spain to kick off the bi-annual OpenStack Summit and User Conference, the 14th of its kind.

On the heels of the recent OpenStack Newton launch, we will see a wide variety of people, backgrounds, industries, and skill sets represented, all focused on learning about, sharing best practices on, and working on the future of OpenStack.

There are many great sessions, workshops, and evening events happening at the summit this coming week, but three in particular that I want to highlight.

OpenStack Board of Directors Meeting

Did you know that, since OpenStack is an open community, the OpenStack Foundation board meeting is open for members to attend and listen in on the discussion? It’s great for members to have this level of access, so take advantage of the openness built into the OpenStack community, and take a listen.

While some portions of the meeting will be a closed session (rightly so), for most of the meeting you’ll hear about progress on specific initiatives, commentary on new members joining the community, and updates on future directions.

It’s a great experience that more of our members should take part in, so I highly recommend it. You can check out the planned agenda and WebEx details here.

 

OpenStack Ops Meetup

I mentioned the OpenStack operator community in my last blog (“Renewing Focus on Bringing OpenStack to the Masses”), and how I feel strongly about championing the cause of the operators out there.

While many of us are focused on code design, quality, new projects, and so on, the operators are tasked with implementing OpenStack.  This involves the day-to-day effort of running OpenStack clouds, which includes readying IT environments for OpenStack deployments, firsthand implementation of the project, the ongoing maintenance and upgrades of the cluster, and being driven by a specific business goal they will be measured by.

At this Summit, the operators will be hosting an Ops Meetup to get into the meat of OpenStack operations. This stands to be an intense, down-in-the-weeds discussion – not for the faint of heart! 🙂  So if you are among the many tasked with getting OpenStack operational in your environment, head on over and get to know your peers in this space, swap stories of what works well, share best practices, and make connections you can stay in touch with for years to come.

Learn more about the Ops Meetup here.

Certified OpenStack Administrator (COA)

Are you aware that you can now be CERTIFIED as an OpenStack Admin?

The COA is an exam you can take to prove your ability to solve problems using both the command line and graphical interfaces of OpenStack, demonstrating that you have mastered a number of skills required to operate the solution.
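The exam itself is taken against OpenStack’s command line and dashboard, but for a flavor of the kind of day-to-day task an operator (and a COA candidate) handles, here is a minimal sketch using the openstacksdk Python bindings. The cloud name "my-cloud" is an assumed entry in a local clouds.yaml, not anything tied to the exam.

```python
# Minimal sketch of a routine operator task via the openstacksdk bindings:
# list compute instances and flag any that are not ACTIVE.
# Assumes a clouds.yaml with an entry named "my-cloud".
import openstack

conn = openstack.connect(cloud="my-cloud")

for server in conn.compute.servers():
    marker = "" if server.status == "ACTIVE" else "  <-- needs attention"
    print(f"{server.name:30} {server.status}{marker}")
```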

At OpenStack Summit, there are a few COA activities occurring that you should be aware of:

  • COA 101. Anne Bertucio and Heidi Bretz of the OpenStack Foundation will be hosting a 30-minute beginner-level session on the topic of COA, touching on the why / what / how of the COA exam.  (More info here.)
  • COA booth. The Foundation Lounge at the Summit will feature an area dedicated to learning more about the COA.  A variety of OpenStack community volunteers will be pitching in to answer questions, help you find training, and even get you signed up for the COA.  I plan on helping out on Wednesday right after the morning keynotes, so stop by and let’s chat COA.
  • COA exams.  If you’re ready now to take the exam, head on over to https://www.openstack.org/coa/ and get the details on when you can take the exam.  The world needs Certified OpenStack Admins!

(PS – If you need help with some prep, SUSE’s happy to help there – click here to get details on COA training from SUSE.)

I’m looking forward to a great week of re-connecting with a number of you I haven’t seen in some time. If you’re out at Barcelona, look me up – I’d love to hear what OpenStack project you’re working on, learn how you are implementing OpenStack, or discuss where OpenStack can further help you meet your business objectives.

See you in Barcelona!

JOSEPH
@jbgeorge

Renewing Focus on Bringing OpenStack to the Masses

October 17, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Happy Newton Release Week!

This 14th release of OpenStack is one we’ve all been anticipating, with its focus on scalability, resiliency, and overall user experience – important aspects that really matter to enterprise IT organizations, and that help drive broader adoption of OpenStack among those users.

(More on that later.)

Six years of Community, Collaboration, and Growth

I was fortunate enough to have been at the very first OpenStack “meeting of the minds” at the Omni hotel in Austin, back in 2010, when just the IDEA of OpenStack was in preliminary discussions. Just a room full of regular everyday people, across numerous industries, who saw the need for a radical new open source cloud platform, and had a passion to make something happen.

And over the years, we’ve seen progress: the creation of the OpenStack Foundation, thousands of contributors to the project, scores of companies throwing their support and resources behind OpenStack, the first steps beyond North America to Europe and Asia (especially with all the OpenStack excitement in India, China, and Japan), numerous customers adopting our revolutionary cloud project, and on and on.

Power to the People

But this project is also about our people.

And we, as a community of individuals, have grown and evolved over the years as well – as have the developers, customers and end users we hope to serve with the project. What started out with a few visionary souls has now blossomed into a community of 62,000+ members from 629 supporting companies across 186 countries.

As a member of the community, I’ve seen positive growth in my own life since then as well.  I’ve been fortunate to have been part of some great companies – like Dell, HPE, and now SUSE.  And I’ve been able to help enterprise customers solve real problems with impactful open source solutions in big data, storage, HPC, and of course, cloud with OpenStack.

And there might have been one other minor area of self-transformation since those days as well, as the graphic illustrates… 🙂

(Image: before and after.)

Clearly we – the OpenStack project and the OpenStack people – are evolving for the better.

 

So Where Do We Go From Here?

In 2013, I was able to serve the community by being a Director on the OpenStack Foundation Board. Granted, things were still fairly new – new project ideas emerging, new companies wanting to sponsor, developers being added by the day, etc – but there was a personal focus I wanted to drive among our community.

“Bringing OpenStack to the masses.”

And today, my hope for the community remains the same.

While we celebrate all the progress we have made, here are some of my thoughts on what we, as a community, should continue to focus on to bring OpenStack to the masses.

Adoption with the Enterprise by Speaking their Language

Take a look at the most recent OpenStack User Survey. It provides a great snapshot into the areas that users need help.

  • “OpenStack is great to recommend, however there’s a fair amount of complexity that needs to be tackled if one wishes to use it.”
  • “OpenStack lacks far too many core components for anything other than very specialized deployments.”
  • “Technology is good, but no synergies between the sub-projects.”

2016 data suggests that enterprise customers are looking for these sorts of issues to be addressed, as well as security and management practices keeping pace with new features. And, with all the very visible security breaches in recent months, the enterprise is looking for open source projects to put more emphasis on security.  In fact, many of the customers I engage with love the idea of OpenStack, but still need help with these fundamental requirements they have to deal with.

Designing / Positioning OpenStack to Address Business Challenges

Have you ever wondered why so many tall office buildings have revolving doors? Isn’t a regular door simpler, less complex, and easier to use?  Why do we see so many revolving doors, when access can be achieved much more simply by other means?

Because revolving doors don’t exist to solve access problems – they solve a heated-air loss problem.

When a door in a tall building is opened, cold air from outside forces its way in, pushing warmer, lighter air up – which is then lost through vents at the top of the building. Revolving doors limit that loss dramatically by sealing off portions of outside access as they rotate.

Think of our project in the same way.

  • What business challenges can be addressed today / near-term by implementing an OpenStack-based solution?
  • Beyond the customer set looking to build a hosted cloud to resell, what further applications can OpenStack be applied to?
  • How can OpenStack provide an industry-specific competitive advantage to the financial sector?  To healthcare?  To HPC?  To the energy sector?  How about retail or media?

Address the Cultural IT Changes that Need to Happen

I recently read a piece where Jonathan Bryce spoke to the “cultural IT changes that need to occur” – and I love that line of thinking.

Jonathan specifically said “What you do with Windows and Linux administrators is the bigger challenge for a lot of companies. Once you prove the technology, you need policies and training to push people.”

That is spot on.

What we are all working on with OpenStack will fundamentally shift how IT organizations will operate in the future. Let’s take the extra step, and provide guidance to our audiences on how they can evolve and adapt to the coming changes with training, tools, community support, and collaboration. A good example of this is the Certified OpenStack Administrator certification being offered by the Foundation, and training for the COA offered by the OpenStack partner ecosystem.

Further Champion the OpenStack Operator

Operators are on the front lines of implementing OpenStack, and “making things work.” There is no truer test of the validity of our OpenStack efforts than when more and more operators can easily and simply deploy / run / maintain OpenStack instances.

I am encouraged by the increased focus on documentation, connecting developers and operators more, and the growth of a community of operators sharing stories on what works best in practical implementation of OpenStack. And we will see this grow even more at the Operator Summit in Barcelona at OpenStack Summit (details here).

We are making progress here but there’s so much more we can do to better enable a key part of our OpenStack family – the operators.

The Future is Bright

Since we’re celebrating the Newton release, a quote from Isaac Newton on looking ahead seems fitting…

“To myself I am only a child playing on the beach, while vast oceans of truth lie undiscovered before me.”

When it comes to where OpenStack is heading, I’m greatly optimistic.  As it has always been, it will not be easy.  But we are making a difference.

And with continued and increased focus on enterprise adoption, addressing business challenges, aiding in the cultural IT change, and an increased focus on the operator, we can go to the next level.

We can bring OpenStack to the masses.

See you in Barcelona in a few weeks.

Until next time,

JOSEPH
@jbgeorge

 

Living Large: The Challenge of Storing Video, Graphics, and other “LARGE Data”

August 25, 2016


UPDATE SEP 2, 2016: SUSE has released a brand new customer case study on “large data” featuring New York’s Orchard Park Police Department, focused on video storage.  Read the full story here!

————–

This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Everyone’s talking about “big data” – I’m even hearing about “small data” – but those of you who deal in video, audio, and graphics are in the throes of a new challenge:  large data.

Big Data vs Large Data

It’s 2016 – we’ve all heard about big data in some capacity. Generally speaking, it is truckloads of data points from various sources, massive in its volume (LOTS of individual pieces of data), its velocity (the speed at which that data is generated), and its variety (both structured and unstructured data types).  Open source projects like Hadoop have been enabling a generation of analytics work on big data.

So what in the world am I referring to when I say “large data?”

For comparison, while “big data” is a significant number of individual data points of “normal” size, I’m defining “large data” as an individual piece of data that is massive in size. Large data generally does not require real-time or fast access, and is often unstructured in its form (i.e., it doesn’t conform to the parameters of relational databases).

Some examples of large data:

(Chart: examples of large data, such as video, audio, and graphics files.)

 

Why Traditional Storage Has Trouble with Large Data

So why not just throw this into our legacy storage appliances?

Traditional storage solutions (much of what is in most datacenters today) are great at handling standard data.  Meaning, data that is:

  • average / normal in individual data size
  • structured and fits nicely in a relational database
  • when totaled up, doesn’t exceed ~400TB or so in total space

Unfortunately, none of this works for large data.  Large data, due to its size, can consume traditional storage appliances VERY rapidly.  And, since traditional storage was developed when data was thought of in smaller terms (megabytes and gigabytes), large data on traditional storage can bring about performance / SLA impacts.

So when it comes to large data, traditional storage ends up being consumed too rapidly, forcing us to consider adding expensive traditional storage appliances to accommodate.

“Overall cost, performance concerns, complexity and inability to support innovation are the top four frustrations with current storage systems.” – SUSE, Software Defined Storage Research Findings (Aug 2016)

Object Storage Tames Large Data

Object storage, as a technology, is designed to handle storage of large, unstructured data from the ground up.  And since it was built on scalable cloud principles, it can scale to terabytes, petabytes, exabytes, and theoretically beyond.

When you introduce open source to the equation of object storage software, the economics of the whole solution become even better. And since scale-out, open source object storage is essentially software running on commodity servers with local drives, the object storage should scale without issue – as more capacity is needed, you just add more servers.

When it comes to large data – data that is unstructured and individually large, such as video, audio, and graphics – SUSE Enterprise Storage provides the open, scalable, cost-effective, and performant storage experience you need.

SUSE Enterprise Storage – Using Object Storage to Tame Large Data

Here’s how:

  • It is designed from the ground up to tackle large data.  The Ceph project, which is the core of SUSE Enterprise Storage, is built on a foundation of RADOS (Reliable Autonomic Distributed Object Store), and leverages the CRUSH algorithm to scale data across whatever size cluster you have available, without performance hits.  (A short code sketch of talking to such a cluster follows this list.)
  • It provides a frequent and rapid innovation pace. It is 100% open source, which means you have the power of the Ceph community to drive innovation, like erasure coding.  SUSE passes these advantages to its customers by providing a fully updated release every six months, while other Ceph vendors give customers large data features only as part of a once-a-year release.
  • It offers pricing that works for large data. Many object storage vendors, both commercial and open source, choose to charge you based on how much data you store.  The price to store 50TB of large data is different from the price to store 100TB, and different again at 400TB – even if all that data stays on one server!  SUSE chooses to provide its customers “per node” pricing – you pay a subscription only as servers are added.  And when you use storage-dense servers, like the HPE Apollo 4000 storage servers, you get tremendous value.
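Since SUSE Enterprise Storage is built on Ceph, here is a minimal sketch of storing an object with the upstream python-rados bindings. The config path, the pool name "large-data", and the object contents are assumptions for illustration; genuinely large video files would more likely flow through the S3/Swift gateway or RBD than a single write_full call.

```python
# Minimal sketch using the upstream Ceph python-rados bindings.
# The conffile path, pool name, and object contents are illustrative.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("large-data")  # pool must already exist
    try:
        ioctx.write_full("dashcam-clip-0001", b"...video bytes would go here...")
        size, _mtime = ioctx.stat("dashcam-clip-0001")
        print(f"Stored object of {size} bytes in pool 'large-data'")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```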

 

Why wait?  It’s time to kick off a conversation with your SUSE rep on how SUSE Enterprise Storage can help you with your large data storage needs.  You can also click here to learn more.

Until next time,

JOSEPH

@jbgeorge

Address Your Company’s Data Explosion with Storage That Scales

June 27, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Experts predict that our world will generate 44 ZETTABYTES of digital data by 2020.

How about some context?

(Image: 44 zettabytes put in context – think grains of sand.)

Now, you may think that these are all teenage selfies and funny cat videos – in actuality, much of it is legitimate data your company will need to stay competitive and to serve your customers.

 

The Data Explosion Happening in YOUR Industry

Some interesting factoids:

  • An automated manufacturing facility can generate many terabytes of data in a single hour.
  • In the airline industry, a commercial airplane can generate upwards of 40 TB of data per hour.
  • Mining and drilling companies can gather multiple terabytes of data per minute in their day-to-day operations.
  • In the retail world, a single store can collect many TB of customer data, financial data, and inventory data.
  • Hospitals quickly generate terabytes of data on patient health, medical equipment data, and patient x-rays.

The list goes on and on. Service providers, telecommunications, digital media, law enforcement, energy companies, HPC research groups, governments, the financial world, and many other industries (including yours) are experiencing this data deluge now.

And with terabytes of data being generated by single products by the hour or by the minute, the next stop is coming up quick:  PETABYTES OF DATA.

Break the Status Quo!

 

Status Quo Doesn’t Cut It

I know what you’re thinking:  “What’s the problem? I’ve had a storage solution in place for years.  It should be fine.”

Not quite.

  1. You are going to need to deal with a LOT more data than you are storing today in order to maintain your competitive edge.
  2. The storage solutions you’ve been using for years have likely not been designed to handle this unfathomable amount of data.
  3. The costs of merely “adding more” of your current storage solutions to deal with this amount of data can be extremely expensive.

The good news is that there is a way to store data at this scale with better performance at a much better price point.

 

Open Source Scale Out Storage

Why is this route better?

  • It was designed from the ground up for scale.
    Much like mobile devices changed the way we communicate / interact / take pictures / trade stock, scale out storage is a different design for storage. Instead of all-in-one storage boxes, it uses a “distributed model” – farming out the storage to as many servers / hard drives as it has access to, making it very scalable and very performant.  (Cloud environments leverage a very similar model for computing.)  A toy sketch of this placement idea appears after this list.
  • Its cost is primarily commodity servers with hard drives and software.
    Traditional storage solutions are expensive to scale in capacity or performance.  Instead of expensive engineered black boxes, we are looking at commodity servers and a bit of software that sits on each server – you then just add a “software + server” combo as you need to scale.
  • When you go open source, the software benefits get even better.
    Much like other open source technologies, such as Linux operating systems, open source scale out storage allows users to take advantage of rapid innovation from the developer communities, as well as a cost model based primarily on support and services rather than software license fees.
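To make the “distributed model” concrete, here is a toy sketch of deterministic object placement across commodity servers – the idea that any client can compute where an object lives, and that adding a server simply adds capacity. This is not CRUSH or any product's actual algorithm, and the server names are made up; it is just the core idea.

```python
# Toy illustration of scale-out placement: each object is deterministically
# mapped to a few servers by hashing, so adding servers adds capacity.
# This is NOT CRUSH or any product's real algorithm -- just the core idea.
import hashlib

def place(obj_name: str, servers: list, replicas: int = 2) -> list:
    """Rank servers by a per-object hash and keep the top `replicas`."""
    ranked = sorted(
        servers,
        key=lambda s: hashlib.sha256(f"{obj_name}:{s}".encode()).hexdigest(),
    )
    return ranked[:replicas]

nodes = [f"storage-node-{i}" for i in range(1, 5)]
print(place("patient-scan-042.dcm", nodes))                       # two of the four nodes
print(place("patient-scan-042.dcm", nodes + ["storage-node-5"]))  # after scaling out
```

Because placement is computed rather than looked up in a central index, every client can locate an object’s servers on its own – a big part of why this model scales so well.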

 

Ready.  Set.  Go.

At SUSE, we’ve put this together in an offering called SUSE Enterprise Storage, an intelligent software-defined storage management solution, powered by the open source Ceph project.

It delivers what we’ve talked about: open source scale out storage.  It scales, it performs, and it’s open source – a great solution to manage all that data that’s coming your way, that will scale as your data needs grow.

And with SUSE behind you, you’ll get full services and support to any level you need.

 

OK, enough talk – it’s time for you to get started.

And here’s a great way to kick this off: Go get your FREE TRIAL of SUSE Enterprise Storage.  Just click this link, and you’ll be directed to the site (note you’ll be prompted to do a quick registration.)  It will give you quick access to the scale out storage tech we’ve talked about, and you can begin your transition over to the new evolution of storage technology.

Until next time,

JOSEPH
@jbgeorge