Archive

Posts Tagged ‘open source’

Is Your HPC System Fully Optimized? + Live Chat

June 11, 2018


This is a duplicate of a high-performance computing blog I authored for Cray, originally published at the Cray Blog Site.


Let’s talk operating systems…

Take a step back and look at the systems you have in place to run applications and workloads that require high core counts to get the job done. (You know, jobs like fraud detection, seismic analysis, cancer research, patient data analysis, weather prediction, and more — jobs that can really take your organization to the next level, trying to solve seriously tough challenges.)

You’re likely running a cutting-edge processor, looking to take advantage of the latest and greatest innovations in compute processing and floating-point calculations. The models you’re running are complex, pushing the bounds of math and science, so you want the best you can get when it comes to processor power.

You’re probably looking at a high-performance file system to ensure that your gigantic swaths of data can be accessed, processed, and stored so your outcomes tell the full story.

You’re most likely using a network interconnect that can really move so that your system can hum, connecting data and processing in a seamless manner, with limited performance hit.

We can go on and on, in terms of drive types, memory choices, and even cabling decisions. The more optimized your system can be for a high-performance workload, the faster and better your results will be.

But what about your software stack, starting with your operating system?

Live tweet chat

Join us for a live online chat at 10 a.m. Tuesday, June 19 — “Do you need an HPC-optimized OS?” — to learn about optimizing your HPC workloads with an improved OS. You can participate using a Twitter, LinkedIn or Facebook account and the hashtag #GetHPCOS.

Sunny Sundstrom and I will discuss what tools to use to customize the environment for your user base, whether there’s value in packaging things together, and what the underlying OS really means for you.

Bring your questions or offer your own expertise.

Moment of truth time

Are you optimizing your system at the operating environment level, like you are across the rest of your system? Are you squeezing every bit of performance benefit out of your system, even when it comes to the OS?

Think about it — if you’re searching for an additional area to drive a performance edge, the operating system could be just the place to look:

  • There are methods to allocate jobs to run on specific nodes tuned for specific workloads — driven at the OS level.
  • You can deploy a lighter-weight OS instance on your working compute node to avoid unnecessarily bringing down the performance of the job, while ensuring your service nodes can adequately run and manage your system.
  • You can decrease jitter of the overall system by using the OS as your control point.

All of this at the OS level.

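The node-allocation point above can be sketched in a few lines. Resource managers like Slurm let jobs request nodes by feature tag (its `--constraint` option); the toy Python below mimics that matching step. The node names, feature tags, and core counts are hypothetical, purely for illustration:

```python
# Toy sketch of workload-aware node selection, in the spirit of what a
# resource manager (e.g. Slurm with node feature tags) does on top of
# OS-level node tuning. All names and numbers here are made up.

NODES = {
    "cn001": {"features": {"highmem", "avx512"}, "free_cores": 64},
    "cn002": {"features": {"gpu", "nvme"},       "free_cores": 32},
    "cn003": {"features": {"avx512"},            "free_cores": 64},
}

def pick_nodes(required_features, cores_needed):
    """Return the node names whose tuning matches the job's requirements."""
    return sorted(
        name for name, info in NODES.items()
        if required_features <= info["features"]      # all tags present
        and info["free_cores"] >= cores_needed        # enough capacity
    )

# A vectorized job wants AVX-512 nodes with 64 free cores:
print(pick_nodes({"avx512"}, 64))   # → ['cn001', 'cn003']
```

In a real deployment the scheduler does this matching for you; the point is that the OS image and node-level tuning, not just the hardware, determine which work lands where.
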
Take a deeper look at your overall system, including the software stack, and specifically your operating system and operating environment.

You may be pleasantly surprised to find that you can drive even more performance in a seemingly unexpected place.

Until next time,

JBG
@jbgeorge

Add our 6/19 tweet chat to your calendar for further discussion.

Running for the OpenStack Board: “All In” on OpenStack

January 5, 2017

This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

In thinking about the OpenStack community, our approach to the project going forward, and the upcoming Board elections, I’m reminded of a specific hand of the poker game Texas Hold ‘Em I observed a few years back between two players.

As one particular hand began, both players had similar chip stacks, and were each dealt cards that were statistically favorable to win.

The hand played out like most others – the flop, the turn, the river, betting, calling, etc.  And as the hand moved toward its conclusion, those of us observing could see that one player held the statistically better cards, and would presumably take the win.

But then the second player made a bold move that turned everything on its head.

He went “all in.”

The “all in” move in poker is one that commits all of your chips to the pot, and often requires your opponent to make a decision for most or all of their chips. It is an aggressive move in this scenario.

After taking some time to consider his options, the first player ultimately chose to fold his strong cards and cut his perceived losses, allowing the other player to claim the winnings.

And that prize was claimed almost entirely because of the “all in” strategy.

Clearly, going “all in” can be a very strong move indeed.

 

Decision Time in OpenStack

Next week – Monday, January 9 through Friday, January 13 – is an important week for the OpenStack community, as we elect the 2017 Individual Representatives to the Foundation’s Board of Directors.

I’m honored to have been nominated as a candidate for Board Director, to potentially serve the community again, as I did back in 2013.

Back in the summer of 2010, I was fortunate to be one of the few in the crowded ballroom at the Omni Hotel in Austin, Texas, witnessing the birth of the OpenStack project. And it is amazing to see how far it has come – but with a tremendous amount of work yet to do.

Over the years, we’ve been fortunate to celebrate tremendous wins and market excitement. Other times, there were roadblocks to overcome.  And similar to the aforementioned poker game, we often had to analyze “the hand” we were dealt, “estimate the odds” of where cloud customers and the market were headed, and position ourselves to maximize our chances for success – often trusting our instincts when available data was incomplete at best.

And, as with many new projects that are in growth phase, our community was often put in a position to re-confirm our commitment to our mission. And our response was resounding and consistent on where we stood….

“All in.”

 

Remaining “All In” with OpenStack

As I mentioned a few weeks ago, it’s critical that we stay committed to the cause of OpenStack and its objective.

There are four key areas of focus for OpenStack that I hope to advocate if elected to the board.

  1. OpenStack adoption within the enterprise worldwide.  I am in the camp that very much believes in the private cloud (as well as public cloud), and that the open source and vendor communities need to put more effort and resources into ensuring OpenStack is the optimal private cloud out there, across all industries / geographies / etc.
  2. Designing and positioning OpenStack to address tangible business challenges.  The enterprise customer is not seeking a new technology – they are looking for things like ways to make IT management more self-service, a means to drive on-demand scalability of infrastructure and PaaS, and a way to operate workloads on-premise AS WELL AS off-premise.
  3. Addressing the cultural IT changes that need to occur.  As cloud continues to permeate the enterprise IT organization, we need to deliver the right training and certifications to enable existing IT experts to transition to this new means of IT service.  If we can ensure these valuable people have a place in the new archetype, they will be our advocates as well.
  4. Championing the OpenStack operator.  The reality of cloud is not just in the using, but in the operating.  There is a strong contingent of operators within our community, and their role is critical to our success – we need to continue to enable this important function.

I’ve been fortunate to be a part of a number of technology movements in my career, just as they started to make the turn from innovative idea to consistent, reliable IT necessity. And this is why I continue to be excited about the prospect of OpenStack – I’m seeing growth with more customers, more use cases, more production implementations.

And, while there may be detractors out there, coining catchy and nonsensical “as-a-Service” buzzwords, my position on OpenStack should sound familiar – because it hasn’t changed since Day One.

“All in.”

And, if given the opportunity, I hope to partner with you to get the rest of the world “all in” on OpenStack as well.

Until next time,

JOSEPH
@jbgeorge

Joseph George is the Vice President of Solutions Strategy at SUSE, and is a candidate for OpenStack Board of Directors.  OpenStack Elections take place on the week of January 9, 2017.
Click here to learn more.

 

OpenStack, Now and Moving Ahead: Lessons from My Own Personal Transformation

December 15, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

It’s a transformative time for OpenStack.

And I know a thing or two about transformations.

Over the last two and a half years, I’ve managed to lose over 90 pounds.

(Yes, you read that right.)

It was a long and arduous effort, and it is a major personal accomplishment that I take a lot of pride in.

Lessons I’ve Learned

When you go through a major transformation like that, you learn a few things about the process, the journey, and about yourself.

With OpenStack on my mind these days – especially after being nominated for the OpenStack Foundation Board election – I can see correlations between my story and where we need to go with OpenStack.

While there are a number of lessons learned, I’d like to delve into three that are particularly pertinent for our open source cloud project.

1. Clarity on the Goal and the Motivation

It’s a very familiar story for many people.  Over the years, I had gained a little bit of weight here and there as life events occurred – graduated college, first job, moved cities, etc. And I had always told myself (and others), “By the time I turned 40 years old, I will be back to my high school weight.”

The year I was to turn 40, I realized that I was running out of time to make good on my word!

And there it was – my goal and my motivation.

So let’s turn to OpenStack – what is our goal and motivation as a project?

According to wiki.OpenStack.org, the OpenStack mission is “to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable. OpenStack is open source, openly designed, openly developed by an open community.”

That’s our goal and motivation:

  • meet the needs of public and private clouds
  • no matter the size
  • simple to deploy
  • very scalable
  • open across all parameters

While we exist in a time where it’s very easy to be distracted by every new, shiny item that comes along, we must remember our mission, our goal, our motivation – and stay true to what we set out to accomplish.

2. Staying Focused During the “Middle” of the Journey

When I was on the path to lose 90 pounds, it was very tempting to be satisfied during the middle part of the journey.

After losing 50 pounds, needless to say, I looked and felt dramatically better than I had before.  Oftentimes, I was congratulated – as if I had reached my destination.

But I had not reached my destination.

While I had made great progress – and there were very tangible results to demonstrate that – I had not yet fully achieved my goal.  And looking back, I am happy that I was not content to stop halfway through. While I had a lot to be proud of at that point, there was much more to be done.

OpenStack has come a long way in its fourteen releases:

  • The phenomenal Newton release focused on scalability, interoperability, and resiliency – things that many potential customers and users have been waiting for.
  • The project has now been validated as 100% compliant by the Core Infrastructure Initiative (CII), a Linux Foundation project – a major milestone toward the security of OpenStack.
  • Our community now offers the “Certified OpenStack Administrator” certification, a staple of datacenter software that much of the enterprise expects, further validating OpenStack for them.

We’ve come a long way.   But there is more to go to achieve our ultimate goal.  Remember our mission: open source cloud, public and private, across all size clouds, massively scalable, and simple to implement.

We are enabling an amazing number of users now, but there is more to do to achieve our goal. While we celebrate our current success, and as more and more customers are being successful with OpenStack in production, we need to keep our eyes on the prize we committed to.

3. Constantly Learning and Adapting

While losing 90 pounds was a major personal accomplishment, it could all have been in vain if I did not learn how to maintain the weight loss once it was achieved.

This meant learning what worked and what didn’t work, as well as adapting to achieve a permanent solution.

Case in point: a part of most weight loss plans is to get plenty of water daily, something I still do to this day. While providing numerous health advantages, it is also a big help with weight loss. However, I found that throughout the day, I would get consumed with daily activities and reach the end of the day without having reached my water requirement goal.

Through some experimentation with tactics – which included setting up reminders on my phone and keeping water with me at all times, among other ideas – I arrived at my personal solution: GET IT DONE EARLY.

I made it a point to get through my water goal at the beginning of the day, before my daily activities began. This way, if I did not remember to drink regularly throughout the day, it was of no consequence since I had already met my daily goal.

We live in a world where open source is being adopted by ever more people, including many open source newcomers. From Linux to Hadoop to Ceph to Kubernetes, we are seeing more and more projects find success with a new breed of users.  OpenStack’s role is not to shun these projects as isolationists, but rather to understand how OpenStack can adapt so that we maximize attainment of our mission.

This also means that we understand how our project gets “translated” for the bevy of customers who have legitimate challenges that OpenStack can help address. It means that we help potential users wade through the cultural IT changes that will be required.

Learning where our market is taking us, as well as adapting to the changing technology landscape, remains crucial for the success of the project.

Room for Optimism

I am personally very optimistic about where OpenStack goes from here. We have come a long way, and have much to be proud of.  But much remains to be done to achieve our goal, so we must be steadfast in our resolve and focus.

And it is a mission that we can certainly accomplish.  I believe in our vision, our project, and our community.

And take it from me – reaching BIG milestones is very, very rewarding.

Until next time,

JOSEPH
@jbgeorge

HPC: Enabling Fraud Detection, Keeping Drivers Safe, Helping Cure Disease, and So Much More

November 14, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Ask kids what they want to be when they grow up, and you will often hear aspirations like:

  • “I want to be an astronaut, and go to outer space!”
  • “I want to be a policeman / policewoman, and keep people safe!”
  • “I want to be a doctor – I could find a cure for cancer!”
  • “I want to build cars / planes / rocket ships / buildings / etc – the best ones in the world!”
  • “I want to be an engineer in a high performance computing department!”

OK, that last one rarely comes up.

Actually, I’ve NEVER heard it come up.

But here’s the irony of it all…

That last item is often a major enabler for all the other items.

Surprised?  A number of people are.

For many, “high performance computing” – or “HPC” for short – often has a reputation for being a largely academic engineering model, reserved for university PhDs seeking to prove out futuristic theories.

The reality is that high performance computing has influenced a number of things we take for granted on a daily basis, across a number of industries, from healthcare to finance to energy and more.

HPC has been an engineering staple in a number of industries for many years, and has enabled a number of the innovations we all enjoy on a daily basis. And it’s critical to how we will function as a society going forward.

Here are a few examples:

  • Do you enjoy driving a car or other vehicle?  Thank an HPC department at the auto manufacturer. There’s a good chance that an HPC effort was behind modeling the safety hazards you may encounter as a driver: unexpected road obstacles, component failure, wear and tear of parts after thousands of miles, and even human driver behavior and error.
  • Have you fueled your car with gas recently?  Thank an HPC department at the energy / oil & gas company.  While there is innovation around electric vehicles, many of us still use gasoline to ensure we can get around.  Exploring for, and finding, oil can be an arduous, time-consuming, expensive effort.  With HPC modeling, energy companies can find pockets of resources sooner, limiting the amount of exploratory drilling, and get fuel for your car to you more efficiently.
  • Have you been treated for illnesses with medical innovation?  Thank an HPC department at the pharmaceutical firm and associated research hospitals.  Progress in the treatment of health ailments can trace innovation to HPC teams working in partnership with health care professionals.  In fact, much of the research done today into curing the world’s diseases and into genomics is done on HPC clusters.
  • Have you ever been proactively contacted by your bank about suspected fraud on your account?  Thank an HPC department at the financial institution.  Oftentimes, banking companies will use HPC clusters to run millions of Monte Carlo simulations to model financial fraud and intrusive hacks.  The number of models required, and the depth at which they analyze, demands a significant amount of processing power – a stronger-than-normal computing structure.

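As a concrete (if deliberately tiny) illustration of the Monte Carlo bullet above, here is a Python sketch that estimates a loss probability by simulation – a stand-in for the far larger models banks run on HPC clusters. The P&L distribution, loss threshold, and trial count are all illustrative assumptions, not any real institution’s model:

```python
import random

def simulate_loss_probability(n_trials=100_000, seed=42):
    """Monte Carlo estimate of the chance that a 10-day random walk of
    daily P&L ends below a loss threshold. A toy stand-in for the risk
    models banks run at vastly larger scale on HPC clusters."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    threshold = -3.0                   # illustrative loss threshold
    losses = 0
    for _ in range(n_trials):
        # Sum 10 daily P&L draws: slight positive drift, unit volatility.
        pnl = sum(rng.gauss(0.05, 1.0) for _ in range(10))
        if pnl < threshold:
            losses += 1
    return losses / n_trials

print(f"estimated loss probability: {simulate_loss_probability():.3f}")
```

With 100,000 trials the estimate is stable to a couple of decimal places; production risk models scale this to millions of scenarios, which is exactly where HPC clusters earn their keep.
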
And the list goes on and on.

  • Security enablement
  • Banking risk analysis
  • Space exploration
  • Aircraft safety
  • Forecasting natural disasters
  • So much more…

If it’s a tough problem to solve, more likely than not, HPC is involved.

And that’s what’s so galvanizing about HPC. It is a computing model that enables us to do so many of the things we take for granted today, but is also on the forefront of new innovation coming from multiple industries in the future.

HPC is also a place where emerging technologies get early adoption, mainly because experts in HPC require new tech to get even deeper into their trade.  Open source is a major staple of this group of users, especially with deep adoption of Linux.

You also see early adoption of open source cloud tech like OpenStack (to help institutions share compute power and storage to collaborate) and of open source distributed storage tech like Ceph (to connect highly performant file systems to colder backup storage, often holding “large data”).  I anticipate we will see this space be among the first to broadly adopt tech in Internet-of-Things (IoT), blockchain, and more.

HPC has been important enough that governments around the world have funded multiple initiatives to drive more innovation using the model.

This week, there is a large gathering of HPC experts in Salt Lake City (SuperComputing16), where engineers and researchers meet, discuss, and collaborate on further enabling HPC.  From implementing tech like cloud and distributed storage, to best practices in modeling and infrastructure, to driving more progress in medicine, energy, finance, security, and manufacturing, this should be a stellar week with some of the best minds around.  (SUSE is out here as well – David has a great blog here on everything that we’re doing at the event.)

High performance computing: take a second look – it may be much more than you originally thought.

And maybe it can help revolutionize YOUR industry.

Until next time,

JOSEPH
@jbgeorge

Highlights from OpenStack Summit Barcelona

October 31, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

What a great week at the OpenStack Summit this past week in Barcelona! Fantastic keynotes, great sessions, and excellent hallway conversations.  It was great to meet a number of new Stackers as well as rekindle old friendships from back when OpenStack kicked off in 2010.

A few items of note from my perspective:

OpenStack Foundation Board of Directors Meeting


As I mentioned in my last blog, it is the right of every OpenStack member to attend and listen in on each board meeting that the OpenStack Foundation Board of Directors holds.  I made sure to head out on Monday and attend most of the day.  There was a packed agenda, so here are a few highlights:

  • Interesting discussion around the User Committee project that board member Edgar Magana is working toward, with discussion on its composition, whether members should be elected, and if bylaw changes are warranted.  It was a deep topic that required further time, so it was deferred to a later discussion, with work to be done to map out the details. This is an important endeavor for the community in my opinion – I will be keeping an eye on how this progresses.
  • A number of strong presentations by prospective gold members were delivered as they made their cases to be added to that tier. I was especially happy to see a number of Chinese companies presenting and making their case.  China is a fantastic growth opportunity for the OpenStack project, and it was encouraging to see players in that market discuss all they are doing for OpenStack in the region.  Ultimately, we saw City Network, Deutsche Telekom, 99Cloud and China Mobile all get voted in as Gold members.
  • Lauren Sell (VP of Marketing for the Foundation) spoke on a visionary model to where her team is investigating how our community can engage with other projects in terms of user events and conferences.  Kubernetes, Ceph, and other projects were named as examples.  This is a great indicator of how we’ve evolved, as it highlights that often multiple projects are needed to address actual business challenges.  A strong indicator of maturity for the community.

Two Major SUSE OpenStack Announcements

SUSE advancements in enterprise ready OpenStack made its way to the Summit in a big way this week.

  1. SUSE OpenStack Cloud 7:  While we are very proud to be one of the first vendors to provide an enterprise-grade Newton based OpenStack distribution, this release also offers features like new Container-as-a-Service capabilities and non-disruptive upgrade capabilities.

    Wait, non-disruptive upgrade?  As in, no downtime?  And no service interruptions?

    That’s right – disruption to service is a big no-no in the enterprise IT world, and now SUSE OpenStack Cloud 7 provides you the ability to stay live during an OpenStack upgrade.

  2. Even more reason to become a COA.  All the buzz around the Foundation’s “Certified OpenStack Administrator” exam got even better this week when SUSE announced that the exam would now feature the SUSE platform as an option.

    And BIG bonus win – if you pass the COA using the SUSE platform, you will be granted

    1. the Foundation’s COA certification
    2. SUSE Certified Administrator in OpenStack Cloud certification

That’s two certifications with one exam.  (Be sure to specify the SUSE platform when taking the exam to take advantage of this option.)

There’s much more to these critical announcements so take a deeper look into them with these blogs by Pete Chadwick and Mark Smith.  Worth a read.

Further Enabling the Enterprise

As you know, enterprise adoption of OpenStack is a major passion of mine – I’ve captured a couple more signs I saw this week of OpenStack continuing to head in the right direction.

  • Progress in Security. On stage this week, OpenStack was awarded the Core Infrastructure Initiative (CII) Best Practices badge.  The CII is a Linux Foundation project that validates open source projects for security, quality, and stability. With this award, OpenStack is now validated as 100% compliant by a trusted third party.  Security FTW!
  • Workload-based Sample Configs.  This stable of assets has been building for some time, but OpenStack.org now boasts a number of reference architectures addressing some of the most critical workloads.  From web apps to HPC to video processing and more, there are great resources on how to optimize OpenStack for these workloads.  (Being a big data fan, I was particularly happy with the big data resources here.)

I’d be interested in hearing what you saw as highlights as well – feel free to leave your thoughts in the comments section.

OK, time to get home, get rested – and do this all over again in 6 months in Boston.

(If you missed the event this time, OpenStack.org has you covered – start here to check out a number of videos from the event, and go from there.)

Until next time,

JOSEPH
@jbgeorge

 

Boston in 2017!

Renewing Focus on Bringing OpenStack to the Masses

October 17, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Happy Newton Release Week!

This 14th release of OpenStack is one we’ve all been anticipating, with its focus on scalability, resiliency, and overall user experience – important aspects that really matter to enterprise IT organizations, and that help drive broader adoption of OpenStack among those users.

(More on that later.)

Six years of Community, Collaboration, and Growth

I was fortunate enough to have been at the very first OpenStack “meeting of the minds” at the Omni hotel in Austin, back in 2010, when just the IDEA of OpenStack was in preliminary discussions. Just a room full of regular everyday people, across numerous industries, who saw the need for a radical new open source cloud platform, and had a passion to make something happen.

And over the years, we’ve seen progress: the creation of the OpenStack Foundation, thousands of contributors to the project, scores of companies throwing their support and resources behind OpenStack, the first steps beyond North America to Europe and Asia (especially with all the OpenStack excitement in India, China, and Japan), numerous customers adopting our revolutionary cloud project, and on and on.

Power to the People

But this project is also about our people.

And we, as a community of individuals, have grown and evolved over the years as well – as have the developers, customers and end users we hope to serve with the project. What started out with a few visionary souls has now blossomed into a community of 62,000+ members from 629 supporting companies across 186 countries.

As a member of the community, I’ve seen positive growth in my own life since then as well.  I’ve been fortunate to have been part of some great companies – like Dell, HPE, and now SUSE.  And I’ve been able to help enterprise customers solve real problems with impactful open source solutions in big data, storage, HPC, and of course, cloud with OpenStack.

And there might have been one other minor area of self-transformation since those days as well, as the graphic illustrates… 🙂


Clearly we – the OpenStack project and the OpenStack people – are evolving for the better.

 

So Where Do We Go From Here?

In 2013, I was able to serve the community by being a Director on the OpenStack Foundation Board. Granted, things were still fairly new – new project ideas emerging, new companies wanting to sponsor, developers being added by the day, etc – but there was a personal focus I wanted to drive among our community.

“Bringing OpenStack to the masses.”

And today, my hope for the community remains the same.

While we celebrate all the progress we have made, here are some of my thoughts on what we, as a community, should continue to focus on to bring OpenStack to the masses.

Adoption with the Enterprise by Speaking their Language

Take a look at the most recent OpenStack User Survey. It provides a great snapshot into the areas that users need help.

  • “OpenStack is great to recommend, however there’s a fair amount of complexity that needs to be tackled if one wishes to use it.”
  • “OpenStack lacks far too many core components for anything other than very specialized deployments.”
  • “Technology is good, but no synergies between the sub-projects.”

2016 data suggests that enterprise customers are looking for these sorts of issues to be addressed, as well as security and management practices keeping pace with new features. And, with all the very visible security breaches in recent months, the enterprise is looking for open source projects to put more emphasis on security.  In fact, many of the customers I engage with love the idea of OpenStack, but still need help with these fundamental requirements they have to deal with.

Designing / Positioning OpenStack to Address Business Challenges

Have you ever wondered why so many tall office buildings have revolving doors? Isn’t a regular door simpler, less complex, and easier to use?  Why do we see so many revolving doors when access can be achieved much more simply by other means?

Because revolving doors don’t exist to solve access problems – they solve a heated-air loss problem.

When a door in a tall building is opened, cold air from outside forces its way in, pushing the warmer, lighter air up – which is then lost through vents at the top of the building. Revolving doors dramatically limit that loss by sealing off portions of outside access as they rotate.

Think of our project in the same way.

  • What business challenges can be addressed today / near-term by implementing an OpenStack-based solution?
  • Beyond the customer set looking to build a hosted cloud to resell, what further applications can OpenStack be applied to?
  • How can OpenStack provide an industry-specific competitive advantage to the financial sector?  To healthcare?  To HPC?  To the energy sector?  How about retail or media?

Address the Cultural IT Changes that Need to Happen

I recently read a piece where Jonathan Bryce spoke to the “cultural IT changes that need to occur” – and I love that line of thinking.

Jonathan specifically said “What you do with Windows and Linux administrators is the bigger challenge for a lot of companies. Once you prove the technology, you need policies and training to push people.”

That is spot on.

What we are all working on with OpenStack will fundamentally shift how IT organizations will operate in the future. Let’s take the extra step, and provide guidance to our audiences on how they can evolve and adapt to the coming changes with training, tools, community support, and collaboration. A good example of this is the Certified OpenStack Administrator certification being offered by the Foundation, and training for the COA offered by the OpenStack partner ecosystem.

Further Champion the OpenStack Operator

Operators are on the front lines of implementing OpenStack, and “making things work.” There is no truer test of the validity of our OpenStack efforts than when more and more operators can easily and simply deploy / run / maintain OpenStack instances.

I am encouraged by the increased focus on documentation, connecting developers and operators more, and the growth of a community of operators sharing stories on what works best in practical implementation of OpenStack. And we will see this grow even more at the Operator Summit in Barcelona at OpenStack Summit (details here).

We are making progress here but there’s so much more we can do to better enable a key part of our OpenStack family – the operators.

The Future is Bright

Since we’re celebrating the Newton release, a quote from Isaac Newton on looking ahead seems fitting…

“To myself I am only a child playing on the beach, while vast oceans of truth lie undiscovered before me.”

When it comes to where OpenStack is heading, I’m greatly optimistic.  As it has always been, it will not be easy.  But we are making a difference.

And with continued and increased focus on enterprise adoption, addressing business challenges, aiding in the cultural IT change, and an increased focus on the operator, we can go to the next level.

We can bring OpenStack to the masses.

See you in Barcelona in a few weeks.

Until next time,

JOSEPH
@jbgeorge


The HP Big Data Reference Architecture: It’s Worth Taking a Closer Look…

January 27, 2015 Leave a comment

This is a duplicate of the blog I’ve authored on the HP blog site at http://h30507.www3.hp.com/t5/Hyperscale-Computing-Blog/The-HP-Big-Data-Reference-Architecture-It-s-Worth-Taking-a/ba-p/179502#.VMfTrrHnb4Z

I recently posted a blog on the value that purpose-built products and solutions bring to the table, specifically around the HP ProLiant SL4540 and how it really steps up your game when it comes to big data, object storage, and other server-based storage instances.

Last month, at the Discover event in Barcelona, we announced the revolutionary HP Big Data Reference Architecture – a major step forward in how we, as a community of users, do Hadoop and big data – and a stellar example of how purpose-built solutions can accelerate technologies like big data.  We’re proud that HP is leading the way in driving this new model of innovation, with the support and partnership of the leading voices in Hadoop today.

Here’s the quick version on what the HP Big Data Reference Architecture is all about:

Think about all the Hadoop clusters you’ve implemented in your environment – they could be pilot or production clusters, hosted by developer or business teams, and hosting a variety of applications.  If you’re following standard Hadoop guidance, each instance is most likely a set of general purpose server nodes with local storage.

For example, your IT group may be running a 10 node Hadoop pilot on servers with local drives, your marketing team may have a 25 node Hadoop production cluster monitoring social media on similar servers with local drives, and perhaps similar for the web team tracking logs, the support team tracking customer cases, and sales projecting pipeline – each with their own set of compute + local storage instances.

There’s nothing wrong with that set up – It’s the standard configuration that most people use.  And it works well.

However….

Just imagine if we made a few tweaks to that architecture.

  • What if we replaced the good-enough general purpose nodes, and replaced them with purpose-built nodes?
    • For compute, what if we used HP Moonshot, which is purpose-built for maximum compute density and price/performance?
    • For storage, what if we used HP ProLiant SL4540, which is purpose-built for dense storage capacity, able to get over 3PB of capacity in a single rack?
  • What if we took all the individual silos of storage, and aggregated them into a single volume using the purpose-built SL4540?  This way all the individual compute nodes would be pinging a single volume of storage.
  • And what if we ensured we were using some of the newer high speed Ethernet networking to interconnect the nodes?

Well, we did.

And the results are astounding.

While there is a very apparent cost benefit and easier management, there is also a surprising bump in read and write performance.

It was a surprise to us in the labs, but we have validated it in a variety of test cases.  It works, and it’s a big deal.
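The tweak at the heart of the design is easy to picture: instead of each team’s cluster writing to its own local disks, every compute node points at the same shared HDFS namespace served by the storage tier. As a rough sketch only (the hostname here is hypothetical, and the published reference architecture may configure this differently), the change on each compute node comes down to essentially one Hadoop property:

```xml
<!-- core-site.xml on each compute node: all nodes reference the same
     HDFS namespace, served by the dense storage tier -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- hypothetical hostname for the shared storage tier's NameNode -->
    <value>hdfs://storage-tier-namenode.example.com:8020</value>
  </property>
</configuration>
```

With every cluster resolving the same fs.defaultFS, the separate silos collapse into one aggregated pool.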

And Hadoop industry leaders agree.

“Apache Hadoop is evolving and it is important that the user and developer communities are included in how the IT infrastructure landscape is changing.  As the leader in driving innovation of the Hadoop platform across the industry, Cloudera is working with and across the technology industry to enable organizations to derive business value from all of their data.  We continue to extend our partnership with HP to provide our customers with an array of platform options for their enterprise data hub deployments.  Customers today can choose to run Cloudera on several HP solutions, including the ultra-dense HP Moonshot, purpose-built HP ProLiant SL4540, and work-horse HP Proliant DL servers.  Together, Cloudera and HP are collaborating on enabling customers to run Cloudera on the HP Big Data architecture, which will provide even more choice to organizations and allow them the flexibility to deploy an enterprise data hub on both traditional and newer infrastructure solutions.” – Tim Stevens, VP Business and Corporate Development, Cloudera

“We are pleased to work closely with HP to enable our joint customers’ journey towards their data lake with the HP Big Data Architecture. Through joint engineering with HP and our work within the Apache Hadoop community, HP customers will be able to take advantage of the latest innovations from the Hadoop community and the additional infrastructure flexibility and optimization of the HP Big Data Architecture.” – Mitch Ferguson, VP Corporate Business Development, Hortonworks

And this is just a sample of what HP is doing to think about “what’s next” when it comes to your IT architecture, Hadoop, and broader big data.  There’s more that we’re working on to make your IT run better, and to lead the communities to improved experience with data.

If you’re just now considering a Hadoop implementation or if you’re deep into your journey with Hadoop, you really need to check into this, so here’s what you can do:

  • My pal Greg Battas has posted on the new architecture and goes technically deep into it, so give his blog a read to learn more about the details.
  • Hortonworks has also weighed in with their own blog.

If you’d like to learn more, you can check out the newly published reference architectures that follow this design, featuring HP Moonshot and the ProLiant SL4540.

If you’re looking for even more information, reach out to your HP rep and mention the HP Big Data Reference Architecture.  They can connect you with the right folks to have a deeper conversation on what’s new and innovative with HP, Hadoop, and big data. And, the fun is just getting started – stay tuned for more!

Until next time,

JOSEPH

@jbgeorge

Day 1: Big Data Innovation Summit 2014

April 10, 2014 Leave a comment

.

Hello from sunny Santa Clara!

My team and I are here at the BIG DATA INNOVATION SUMMIT representing Dell (the company I work for), and it’s been a great day one.

I just wanted to take a few minutes to jot down some interesting ideas I heard today:

  • In his keynote, Daniel Austin argued that the “Internet of Things” should really be the “individual network of things” – highlighting that the number of devices, their connectivity, their availability, and their partitioning will be key in the future.
    .
  • One data point that also came out of Daniel’s talk – every person is predicted to generate 20 PETABYTES of data over the course of a lifetime!
    .
  • Juan Lavista of Bing hit on a number of key myths around big data:
    • the most important part of big data is its size
    • to do big data, all you need is Hadoop
    • with big data, theory is no longer needed
    • data scientists are always right 🙂

QUOTE OF THE DAY:  “Correlation does not yield causation.” – Juan Lavista (Bing)
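As an aside, it’s worth sanity-checking what Daniel’s 20-petabyte figure implies as a sustained data rate. Assuming an ~80-year lifespan (my assumption, not his), the back-of-the-envelope math looks like this:

```python
# What sustained data rate does "20 PB per lifetime" imply?
PB = 10**15                                # bytes per petabyte (decimal)
lifetime_bytes = 20 * PB

seconds_per_year = 365.25 * 24 * 3600      # ~31.6 million seconds
lifetime_seconds = 80 * seconds_per_year   # assuming an ~80-year lifespan

rate_mb_per_s = lifetime_bytes / lifetime_seconds / 10**6
print(f"~{rate_mb_per_s:.1f} MB/s, sustained around the clock for 80 years")
```

That works out to roughly 8 MB per second, every second, for an entire lifetime – which puts the scale of the prediction in perspective.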

  • Anthony Scriffignano was quick to admonish the audience that “it’s not just about data, it’s not just about the math…  [data] relationships matter.”
    .
  • Utah’s state government is taking a very progressive view of the areas where analytics can drive efficiency at the state level – census data use, welfare system fraud detection, etc.  And it appears Utah is taking a leadership position in doing so.

I also had the privilege of moderating a panel on the topic of the convergence between HPC and the big data spaces, with representatives on the panel from Dell (Armando Acosta), Intel (Brent Gorda), and the Texas Advanced Computing Center (Niall Gaffney).  Some great discussion about the connections between the two, plus tech talk on the Lustre plug-in and the SLURM resource management project.

Additionally, Dell product strategists Sanjeet Singh and Joey Jablonski presented on a number of real user implementations of big data and analytics technologies – from university student retention projects to building a true centralized, enterprise data hub.  Extremely informative.

All in all, a great day one!

If you’re out here, stop by and visit us at the Dell booth.  We’ll be showcasing our Hadoop and big data solutions, as well as some of the analytics capabilities we offer.

(We’ll also be giving away a Dell tablet on Thursday at 1:30, so be sure to get entered into the drawing early.)

Stay tuned, and I’ll drop another update tomorrow.

Until next time,

JOSEPH
@jbgeorge

Michael Dell Comments on the “Data Economy”

March 24, 2014 Leave a comment

This is a repost of my blog at  .

In this short interview with Inc., Michael Dell provides an overview of the company’s transformation into a leading player in the “data economy.”   

As Michael notes, with the costs of collecting data decreasing, more companies in a growing number of industries are making better use of existing data sources, and gathering data from new sources. 

And that’s where Dell has been enabling customers for years with solutions built with technologies like Hadoop and NoSQL.  Helping companies and organizations make better use of this data, and assisting them in using it to solve their challenges, are just a few of the ways Dell has changed the Big Data conversation, and built an entirely new enterprise business along the way.

As a member of the Technology CEO Council, Michael also recently joined other tech CEOs to discuss the data economy with policy makers.  As an example of the potential of the data economy, he explained how Dell’s growing health information technology practice includes 7 billion medical images. These images are in an aggregated data set allowing researchers to mine them for patterns and predictive analytics.

“There’s lots that can be done with this data that was very, very siloed in the past,” Michael told Computerworld. “We’re really just kind of scratching the surface.”

It’s certainly an exciting time to be at Dell – and the data revolution continues!


It’s OpenStack Foundation Election Time!

December 5, 2013 Leave a comment
.
If you’re a member of the OpenStack community, you’re aware that the community is accepting nominations for the Foundation Board of Directors.
.
I was fortunate enough to be nominated by a couple of folks so I thought I’d post some of my details on my blog.
.
If you’re interested in getting me on the ballot (I need 10 nominations to do so), you can do so here: http://www.openstack.org/community/members/profile/1313
.
======
.

What is your relationship to OpenStack, and why is its success important to you? What would you say is your biggest contribution to OpenStack’s success to date?

In addition to serving on the OpenStack Foundation Board of Directors as a Gold Member Director from Dell, I was also the product manager who brought Dell’s first hardware + software + services OpenStack solution to market in July of 2011.  Now, leading a team of OpenStack professionals, we have brought multiple OpenStack solutions to market, including a solution enabling Hyper-V.
.

I believe OpenStack represents a trend among service providers and enterprise IT toward deeper community collaboration on new technologies and practices, and I will continue to drive the initiative to make my customers and the community successful in a meaningful, real-world way.

.

Describe your experience with other non profits or serving as a board member. How does your experience prepare you for the role of a board member?

I have been active in a number of community and local church capacities that have enabled me to serve as a board member.  That, in addition to my past year as an OpenStack board member, has provided me a pragmatic view of how to grow a community from both a technical and a business perspective.
.
.

What do you see as the Board’s role in OpenStack’s success?

The Board should be the guardian of the business functions of the OpenStack community, as well as the strategist for where OpenStack CAN go and SHOULD go – to further attract more developers, more implementers, and more users.
.
.

What do you think the top priority of the Board should be in 2014?

I see three main priorities, though not to the exclusion of other priorities:
.

1.  Clarify the definition of OpenStack – what is core, what is compliant, and what is not.

2.  Understand where the strategic opportunities lie for OpenStack as a technology, and clear the path to ensure OpenStack gets there.

3.  Fully enable any and every new entrant to OpenStack in a real way – developers, implementers, and users – with the right level of documentation, tools, community support, and vendor support.

.

Thanks, and appreciate your nomination to represent the OpenStack Foundation in 2014!

Until next time,

JOSEPH
@jbgeorge