OpenStack, Now and Moving Ahead: Lessons from My Own Personal Transformation
This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.
It’s a transformative time for OpenStack.
And I know a thing or two about transformations.
Over the last two and a half years, I’ve managed to lose over 90 pounds.
(Yes, you read that right.)
It was a long and arduous effort, and it is a major personal accomplishment that I take a lot of pride in.
Lessons I’ve Learned
When you go through a major transformation like that, you learn a few things about the process, the journey, and about yourself.
With OpenStack on my mind these days – especially after being nominated for the OpenStack Foundation Board election – I can see correlations between my story and where we need to go with OpenStack.
While there are a number of lessons learned, I’d like to delve into three that are particularly pertinent for our open source cloud project.
1. Clarity on the Goal and the Motivation
It’s a very familiar story for many people. Over the years, I had gained a little bit of weight here and there as life events occurred – graduated college, first job, moved cities, etc. And I had always told myself (and others), “By the time I turned 40 years old, I will be back to my high school weight.”
The year I was to turn 40, I realized that I was running out of time to make good on my word!
And there it was – my goal and my motivation.
So let’s turn to OpenStack – what is our goal and motivation as a project?
According to wiki.OpenStack.org, the OpenStack mission is “to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable. OpenStack is open source, openly designed, openly developed by an open community.”
That’s our goal and motivation:
- meet the needs of public and private clouds
- no matter the size
- simple to deploy
- very scalable
- open across all parameters
While we exist in a time where it’s very easy to be distracted by every new, shiny item that comes along, we must remember our mission, our goal, our motivation – and stay true to what we set out to accomplish.
2. Staying Focused During the “Middle” of the Journey
When I was on the path to lose 90 pounds, it was very tempting to be satisfied during the middle part of the journey.
After losing 50 pounds, needless to say, I looked and felt dramatically better than before. Oftentimes, I was congratulated – as if I had reached my destination.
But I had not reached my destination.
While I had made great progress – and there were very tangible results to demonstrate that – I had not yet fully achieved my goal. And looking back, I am happy that I was not content to stop halfway through. While I had a lot to be proud of at that point, there was much more to be done.
OpenStack has come a long way in its fourteen releases:
- The phenomenal Newton release focused on scalability, interoperability, and resiliency – things that many potential customers and users have been waiting for.
- The project has now been validated as 100% compliant by the Core Infrastructure Initiative (CII) as part of the Linux Foundation, a major milestone toward the security of OpenStack.
- Our community now offers the “Certified OpenStack Administrator” certification, a staple of datacenter software that much of the enterprise expects, further validating OpenStack for them.
We’ve come a long way. But there is more to go to achieve our ultimate goal. Remember our mission: open source cloud, public and private, across all size clouds, massively scalable, and simple to implement.
We are enabling an amazing number of users now, but there is more to do to achieve our goal. While we celebrate our current success, and as more and more customers are being successful with OpenStack in production, we need to keep our eyes on the prize we committed to.
3. Constantly Learning and Adapting
While losing 90 pounds was a major personal accomplishment, it could all have been in vain if I did not learn how to maintain the weight loss once it was achieved.
This meant learning what worked and what didn’t work, as well as adapting to achieve a permanent solution.
Case in point: a part of most weight loss plans is to get plenty of water daily, something I still do to this day. Besides providing numerous health advantages, it is also a big help with weight loss. However, I found that throughout the day, I would get consumed with daily activities and reach the end of the day without having met my water goal.
Through some experimentation with tactics – which included setting up reminders on my phone and keeping water with me at all times, among other ideas – I arrived at my personal solution: GET IT DONE EARLY.
I made it a point to get through my water goal at the beginning of the day, before my daily activities began. This way, if I did not remember to drink regularly throughout the day, it was of no consequence since I had already met my daily goal.
We live in a world where open source is being adopted by ever more people, many of them new to open source. From Linux to Hadoop to Ceph to Kubernetes, we are seeing more and more projects find success with a new breed of users. OpenStack’s role is not to shun these projects in isolationist fashion, but rather to understand how OpenStack adapts so that we get maximum attainment of our mission.
This also means that we understand how our project gets “translated” to the bevy of customers who have legitimate challenges to address that OpenStack can help with. It means that we help potential users wade through the cultural IT changes that will be required.
Learning where our market is taking us, as well as adapting to the changing technology landscape, remains crucial for the success of the project.
Room for Optimism
I am personally very optimistic about where OpenStack goes from here. We have come a long way, and have much to be proud of. But much remains to be done to achieve our goal, so we must be steadfast in our resolve and focus.
And it is a mission that we can certainly accomplish. I believe in our vision, our project, and our community.
And take it from me – reaching BIG milestones is very, very rewarding.
Until next time,
JOSEPH
@jbgeorge
HPC: Enabling Fraud Detection, Keeping Drivers Safe, Helping Cure Disease, and So Much More
This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.
Ask kids what they want to be when they grow up, and you will often hear aspirations like:
- “I want to be an astronaut, and go to outer space!”
- “I want to be a policeman / policewoman, and keep people safe!”
- “I want to be a doctor – I could find a cure for cancer!”
- “I want to build cars / planes / rocket ships / buildings / etc – the best ones in the world!”
- “I want to be an engineer in a high performance computing department!”
OK, that last one rarely comes up.
Actually, I’ve NEVER heard it come up.
But here’s the irony of it all…
That last item is often a major enabler for all the other items.
Surprised? A number of people are.
For many, “high performance computing” – or “HPC” for short – often has a reputation for being a largely academic engineering model, reserved for university PhDs seeking to prove out futuristic theories.
The reality is that high performance computing has influenced a number of things we take for granted on a daily basis, across a number of industries, from healthcare to finance to energy and more.
HPC has been an engineering staple in a number of industries for many years, and has enabled a number of the innovations we all enjoy on a daily basis. And it’s critical to how we will function as a society going forward.
Here are a few examples:
- Do you enjoy driving a car or other vehicle? Thank an HPC department at the auto manufacturer. There’s a good chance that an HPC effort was behind modeling the safety hazards that you may encounter as a driver: unexpected road obstacles, component failure, wear and tear of parts after thousands of miles, and even human driver behavior and error.
- Have you fueled your car with gas recently? Thank an HPC department at the energy / oil & gas company. While there is innovation around electric vehicles, many of us still use gasoline to ensure we can get around. Exploring for, and finding, oil can be an arduous, time-consuming, expensive effort. With HPC modeling, energy companies can find pockets of resources sooner, limiting the amount of exploratory drilling, and get fuel for your car to you more efficiently.
- Have you been treated for illnesses with medical innovation? Thank an HPC department at the pharmaceutical firm and associated research hospitals. Progress in the treatment of health ailments can trace innovation to HPC teams working in partnership with health care professionals. In fact, much of the research done today to cure the world’s diseases, including genomics work, is done on HPC clusters.
- Have you ever been proactively contacted by your bank on suspected fraud on your account? Thank an HPC department at the financial institution. Oftentimes, banking companies will use HPC clusters to run millions of “Monte Carlo simulations” to model out financial fraud and intrusive hacks. The number of models required and the depth of the analysis require a significant amount of processing power, calling for a stronger-than-normal computing structure.
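The fraud-modeling idea above can be sketched in a few lines. This is a toy illustration, not any bank’s actual model – the transaction count, fraud rate, and loss distribution here are made-up parameters – but it shows the Monte Carlo shape of the work: simulate many random days, then count how often losses cross a threshold.

```python
import random

def simulate_daily_loss(n_transactions=1000, fraud_rate=0.001, avg_fraud_amount=500.0):
    """Simulate one day's fraud loss: each transaction is fraudulent
    with probability fraud_rate, costing a random amount."""
    loss = 0.0
    for _ in range(n_transactions):
        if random.random() < fraud_rate:
            # Exponentially distributed fraud amount around the average
            loss += random.expovariate(1.0 / avg_fraud_amount)
    return loss

def monte_carlo_risk(trials=10000, threshold=2000.0):
    """Estimate the probability that daily fraud loss exceeds a threshold
    by running many independent simulated days."""
    exceed = sum(1 for _ in range(trials) if simulate_daily_loss() > threshold)
    return exceed / trials

if __name__ == "__main__":
    print(f"P(daily fraud loss > $2000) ~= {monte_carlo_risk():.4f}")
```

Real fraud models are vastly more sophisticated, but the computational pattern is the same – and running millions of such trials across many scenarios is exactly the kind of workload that demands an HPC cluster.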
And the list goes on and on.
- Security enablement
- Banking risk analysis
- Space exploration
- Aircraft safety
- Forecasting natural disasters
- So much more…
If it’s a tough problem to solve, more likely than not, HPC is involved.
And that’s what’s so galvanizing about HPC. It is a computing model that enables us to do so many of the things we take for granted today, but is also on the forefront of new innovation coming from multiple industries in the future.
HPC is also a place where emerging technologies get early adoption, mainly because experts in HPC require new tech to get even deeper into their trade. Open source is a major staple of this group of users, especially with deep adoption of Linux.
You also see early adoption of open source cloud tech like OpenStack (to help institutions share compute power and storage to collaborate) and of open source distributed storage tech like Ceph (to connect highly performant file systems to colder, backup storage, often holding “large data”). I anticipate we will see this space be among the first to broadly adopt tech in Internet-of-Things (IoT), blockchain, and more.
HPC has been important enough that governments around the world have funded multiple initiatives to drive more innovation using the model. Here are a few examples:
- The National Strategic Computing Initiative
- The Cancer Moonshot Program
- The Exascale Computing Project
- The STEM Program for Students (Science, Technology, Engineering, Math)
This week, there is a large gathering of HPC experts in Salt Lake City (SuperComputing16), where engineers and researchers will meet, discuss, and collaborate on enabling HPC further. From implementing tech like cloud and distributed storage, to best practices in modeling and infrastructure, to driving further progress in medicine, energy, finance, security, and manufacturing, this should be a stellar week with some of the best minds around. (SUSE is out here as well – David has a great blog here on everything that we’re doing at the event.)
High performance computing: take a second look – it may be much more than you originally thought.
And maybe it can help revolutionize YOUR industry.
Until next time,
JOSEPH
@jbgeorge
Living Large: The Challenge of Storing Video, Graphics, and other “LARGE Data”
UPDATE SEP 2, 2016: SUSE has released a brand new customer case study on “large data” featuring New York’s Orchard Park Police Department, focused on video storage. Read the full story here!
————–
This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.
Everyone’s talking about “big data” – I’m even hearing about “small data” – but those of you who deal in video, audio, and graphics are in the throes of a new challenge: large data.
Big Data vs Large Data
It’s 2016 – we’ve all heard about big data in some capacity. Generally speaking, it is truckloads of data points from various sources, enormous in its volume (LOTS of individual pieces of data), its velocity (the speed at which that data is generated), and its variety (both structured and unstructured data types). Open source projects like Hadoop have been enabling a generation of analytics work on big data.
So what in the world am I referring to when I say “large data?”
For comparison, while “big data” is a significant number of individual data points, each of “normal” size, I’m defining “large data” as an individual piece of data that is massive in its size. Large data, generally, is not required to have real-time or fast access, and is often unstructured in its form (i.e., it doesn’t conform to the parameters of relational databases).
Some examples of large data: video footage (such as surveillance video), audio recordings, and high-resolution graphics and imaging files.
Why Traditional Storage Has Trouble with Large Data
So why not just throw this into our legacy storage appliances?
Traditional storage solutions (much of what is in most datacenters today) are great at handling standard data – meaning, data that is:
- average / normal in individual data size
- structured and fits nicely in a relational database
- when totaled up, doesn’t exceed ~400TB or so in total space
Unfortunately, none of this works for large data. Large data, due to its size, can consume traditional storage appliances VERY rapidly. And, since traditional storage was developed when data was thought of in smaller terms (megabytes and gigabytes), large data on traditional storage can bring about performance / SLA impacts.
So when it comes to large data, traditional storage ends up being consumed too rapidly, forcing us to consider adding expensive traditional storage appliances to accommodate.
“Overall cost, performance concerns, complexity and inability to support innovation are the top four frustrations with current storage systems.” – SUSE, Software Defined Storage Research Findings (Aug 2016)
Object Storage Tames Large Data
Object storage, as a technology, is designed to handle storage of large, unstructured data from the ground up. And since it was built on scalable cloud principles, it can scale to terabytes, petabytes, exabytes, and theoretically beyond.
When you introduce open source to the equation of object storage software, the economics of the whole solution become even better. And since scale-out, open source object storage is essentially software running on commodity servers with local drives, the object storage should scale without issue – as more capacity is needed, you just add more servers.
When it comes to large data – data that is unstructured and individually large, such as video, audio, and graphics – SUSE Enterprise Storage provides the open, scalable, cost-effective, and performant storage experience you need.
SUSE Enterprise Storage – Using Object Storage to Tame Large Data
Here’s how:
- It is designed from the ground up to tackle large data. The Ceph project, which is the core of SUSE Enterprise Storage, is built on a foundation of RADOS (Reliable Autonomic Distributed Object Store), and leverages the CRUSH algorithm to scale data across whatever size cluster you have available, without performance hits.
- It provides a frequent and rapid innovation pace. It is 100% open source, which means you have the power of the Ceph community to drive innovation, like erasure coding. SUSE gives these advantages to their customers by providing a full updated release every six months, while other Ceph vendors give customers large data features only as part of a once-a-year release.
- It offers pricing that works for large data. Many object storage vendors, both commercial and open source, choose to charge you based on how much data you store. The price to store 50TB of large data is different than the price to store 100TB of large data, and there is a different price if you want to store 400TB of large data – even if all that data stays on one server! SUSE chooses to provide their customers “per node” pricing – you pay subscription only as servers are added. And when you use storage dense servers, like the HPE Apollo 4000 storage servers, you get tremendous value.
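To illustrate why hash-based placement lets a cluster scale without a central lookup table, here is a toy sketch in the spirit of CRUSH. To be clear, this is not the actual CRUSH algorithm – it uses simple rendezvous hashing, while real CRUSH also accounts for cluster hierarchy, weights, and failure domains – but it shows the core idea: any client can deterministically compute which servers hold an object, so adding servers adds capacity without a coordination bottleneck.

```python
import hashlib

def placement_score(object_name: str, node: str) -> int:
    """Deterministic pseudo-random score for an (object, node) pair."""
    digest = hashlib.sha256(f"{object_name}:{node}".encode()).hexdigest()
    return int(digest, 16)

def place_object(object_name: str, nodes: list, replicas: int = 3) -> list:
    """Pick the top-scoring nodes for this object (rendezvous hashing).
    Every client computes the same answer with no central lookup table."""
    ranked = sorted(nodes, key=lambda n: placement_score(object_name, n), reverse=True)
    return ranked[:replicas]

nodes = [f"node{i}" for i in range(8)]
print(place_object("video-0001.mp4", nodes))
```

Because placement is computed rather than looked up, growing the cluster is just a matter of adding nodes to the list – which is exactly the “add more servers as you need capacity” property described above.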
Why wait? It’s time to kick off a conversation with your SUSE rep on how SUSE Enterprise Storage can help you with your large data storage needs. You can also click here to learn more.
Until next time,
JOSEPH
Address Your Company’s Data Explosion with Storage That Scales
This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.
Experts predict that our world will generate 44 ZETTABYTES of digital data by 2020.
How about some context?
Now, you may think that these are all teenage selfies and funny cat videos – in actuality, much of it is legitimate data your company will need to stay competitive and to serve your customers.
The Data Explosion Happening in YOUR Industry
Some interesting factoids:
- An automated manufacturing facility can generate many terabytes of data in a single hour.
- In the airline industry, a commercial airplane can generate upwards of 40 TB of data per hour.
- Mining and drilling companies can gather multiple terabytes of data per minute in their day-to-day operations.
- In the retail world, a single store can collect many TB of customer data, financial data, and inventory data.
- Hospitals quickly generate terabytes of data on patient health, medical equipment data, and patient x-rays.
The list goes on and on. Service providers, telecommunications, digital media, law enforcement, energy companies, HPC research groups, governments, the financial world, and many other industries (including yours) are experiencing this data deluge now.
And with terabytes of data being generated by single products by the hour or by the minute, the next stop is coming up quick: PETABYTES OF DATA.
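Using the airline figure above, a quick back-of-envelope calculation shows just how fast “terabytes per hour” turns into petabytes:

```python
TB_PER_HOUR = 40    # commercial airplane data rate cited above
PETABYTE_TB = 1000  # 1 PB = 1,000 TB (decimal convention)

hours_to_petabyte = PETABYTE_TB / TB_PER_HOUR
print(f"At {TB_PER_HOUR} TB/hour, one plane reaches 1 PB in {hours_to_petabyte:.0f} hours")
# i.e., roughly one day of continuous flight data per petabyte
```

And that is a single aircraft – multiply by a fleet, and the petabyte era arrives even faster.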
Status Quo Doesn’t Cut It
I know what you’re thinking: “What’s the problem? I’ve had a storage solution in place for years. It should be fine.”
Not quite.
- You are going to need to deal with a LOT more data than you are storing today in order to maintain your competitive edge.
- The storage solutions you’ve been using for years have likely not been designed to handle this unfathomable amount of data.
- The costs of merely “adding more” of your current storage solutions to deal with this amount of data can be extremely expensive.
The good news is that there is a way to store data at this scale with better performance at a much better price point.
Open Source Scale Out Storage
Why is this route better?
- It was designed from the ground up for scale.
Much like how mobile devices changed the way we communicate / interact / take pictures / trade stocks, scale out storage is a different design for storage. Instead of all-in-one storage boxes, it uses a “distributed model” – farming out the storage to as many servers / hard drives as it has access to, making it very scalable and very performant. (Cloud environments leverage a very similar model for computing.)
- Its cost is primarily commodity servers with hard drives and software.
Traditional storage solutions are expensive to scale in capacity or performance. Instead of expensive engineered black boxes, we are looking at commodity servers and a bit of software that sits on each server – you then just add a “software + server” combo as you need to scale.
- When you go open source, the software benefits get even better.
Much like other open source technologies, such as Linux operating systems, open source scale out storage allows users to take advantage of rapid innovation from the developer communities, as well as cost benefits that come primarily from support or services, as opposed to software license fees.
Ready. Set. Go.
At SUSE, we’ve put this together in an offering called SUSE Enterprise Storage, an intelligent software-defined storage management solution, powered by the open source Ceph project.
It delivers what we’ve talked about: open source scale out storage. It scales, it performs, and it’s open source – a great solution to manage all that data that’s coming your way, that will scale as your data needs grow.
And with SUSE behind you, you’ll get full services and support to any level you need.
OK, enough talk – it’s time for you to get started.
And here’s a great way to kick this off: go get your FREE TRIAL of SUSE Enterprise Storage. Just click this link, and you’ll be directed to the site (note: you’ll be prompted to do a quick registration). It will give you quick access to the scale out storage tech we’ve talked about, and you can begin your transition over to the new evolution of storage technology.
Until next time,
JOSEPH
@jbgeorge