This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.
Experts predict that our world will generate 44 ZETTABYTES of digital data by 2020.
How about some context?
Now, you may think that these are all teenage selfies and funny cat videos – in actuality, much of it is legitimate data your company will need to stay competitive and to serve your customers.
The Data Explosion Happening in YOUR Industry
Some interesting factoids:
- An automated manufacturing facility can generate many terabytes of data in a single hour.
- In the airline industry, a commercial airplane can generate upwards of 40 TB of data per hour.
- Mining and drilling companies can gather multiple terabytes of data per minute in their day-to-day operations.
- In the retail world, a single store can collect many TB of customer data, financial data, and inventory data.
- Hospitals quickly generate terabytes of data on patient health, medical equipment data, and patient x-rays.
The list goes on and on. Service providers, telecommunications, digital media, law enforcement, energy companies, HPC research groups, governments, the financial world, and many other industries (including yours) are experiencing this data deluge now.
And with terabytes of data being generated by single products by the hour or by the minute, the next stop is coming up quick: PETABYTES OF DATA.
Status Quo Doesn’t Cut It
I know what you’re thinking: “What’s the problem? I’ve had a storage solution in place for years. It should be fine.” Well, here’s the thing:
- You are going to need to deal with a LOT more data than you are storing today in order to maintain your competitive edge.
- The storage solutions you’ve been using for years have likely not been designed to handle this unfathomable amount of data.
- Merely “adding more” of your current storage solution to deal with this amount of data can get extremely expensive.
The good news is that there is a way to store data at this scale with better performance at a much better price point.
Open Source Scale Out Storage
Why is this route better?
- It was designed from the ground up for scale.
Much like how mobile devices changed the way we communicate / interact / take pictures / trade stock, scale out storage is a different design for storage. Instead of all-in-one storage boxes, it uses a “distributed model” – farming out the storage to as many servers / hard drives as it has access to, making it very scalable and very performant. (Cloud environments leverage a very similar model for computing.)
- Its cost is primarily commodity servers with hard drives and software.
Traditional storage solutions are expensive to scale in capacity or performance. Instead of expensive engineered black boxes, we are looking at commodity servers and a bit of software that sits on each server – you then just add a “software + server” combo as you need to scale.
- When you go open source, the software benefits get even better.
Much like other open source technologies, like Linux operating systems, open source scale out storage allows users to take advantage of rapid innovation from the developer communities, as well as cost benefits which are primarily support or services, as opposed to software license fees.
Ready. Set. Go.
At SUSE, we’ve put this together in an offering called SUSE Enterprise Storage, an intelligent software-defined storage management solution, powered by the open source Ceph project.
It delivers what we’ve talked about: open source scale out storage. It scales, it performs, and it’s open source – a great solution to manage all that data that’s coming your way, that will scale as your data needs grow.
And with SUSE behind you, you’ll get full services and support at any level you need.
OK, enough talk – it’s time for you to get started.
And here’s a great way to kick this off: Go get your FREE TRIAL of SUSE Enterprise Storage. Just click this link, and you’ll be directed to the site (note: you’ll be prompted to do a quick registration). It will give you quick access to the scale out storage tech we’ve talked about, and you can begin your transition over to the new evolution of storage technology.
Until next time,
Wanted to let the Austin cloud enthusiasts, professionals, and fans know that Dell (the company that I work for) will be hosting a couple of user group gatherings this week…
The Austin OpenStack Meetup
This meetup has been a staple of the Austin OpenStack community, with Dell having spearheaded its start in October of last year.
We’ve had a number of great companies join Dell in sponsoring this monthly meetup at Austin’s Tech Ranch, including Rackspace, SUSE, Canonical, and even HP. 🙂
This month, we’ve got Puppet Labs joining Dell as a joint sponsor of the meetup. On the docket for discussion:
- Important topics, events, news, etc. from the OpenStack Design Summit and Conference held in San Francisco the week of Apr 16
- Discussion on the recently announced OpenStack Foundation – we hope to have someone from the foundation development team present
- A review of DevStack as a community development platform
Should be loads of fun – come hungry and thirsty – loads of pizza and cokes. (BTW people, let’s at least TRY to make a dent in the salad this time.)
All the details you need to know are at www.meetup.com/OpenStack-Austin.
The Austin Cloud User Group
Dell has been a sponsor of this user group before, and a number of us attend regularly – we’re glad to be back to talk about some of the things going on with Dell’s public cloud. Specifically, our Dell cloud services team will be hosting and talking about the goings-on at Dell in the cloud hosting space.
You’ll see Dell’s cloud evangelist, Stephen Spector, as he touches on:
- Discussion and demos of Dell’s vCloud hosted offering
- Demos of processor-intensive applications in a public cloud setting
- Demos of a few common applications running on Dell’s cloud
If you’ve ever seen Stephen speak, you know you’re in for a treat. For those who don’t know, Stephen is the former Community Manager for the OpenStack community, so we’re ecstatic to have him here at Dell!
Again, come hungry and thirsty – loads of pizza and cokes.
All the details you need to know are at www.meetup.com/AustinCloudUserGroup.
OK, that’s it – be sure to make it out to at least one of these meetups, and we’ll give you a shout out if you make it to both. 🙂
Until next time,
This past week marked the end of a nearly three week journey by a few brave souls from Rackspace and Dell, as the two companies sponsored the team to travel to and from the OpenStack Summit in San Fran last week, making Stacker Stops along the way.
The team – which included folks like Dell’s Andi Abes and Rackspace’s Wayne Walls, Jordan Rinke, Scott Simpson, and Glen Campbell – finally ended their tour this past Friday, pulling into their San Antonio home base.
The team had quite a lofty mission – make the drive from San Antonio to San Fran, spend the week at the summit, and drive back hitting key cities like Los Angeles, Boulder, Dallas, and Austin. As they drove, they’d code and blog. When they stopped, they spread the good word around the OpenStack open cloud.
(And I hear there was a bit of hijinks thrown in as well.)
We had the pleasure of hosting the RoadStackers when they stopped by the Dell campus in Austin – I had a chance to chat with the guys, so take a look at a few of the 90 second videos we put together…
And yeah – we had a little fun with it – enjoy!
- Scott Simpson and I chat about OpenStack Swift – and an upcoming movie?
- Jordan Rinke and I chat about OpenStack and Hyper-V support – and REM cycles on the trip
- Glen Campbell and I chat about his Top 3 important topics at the OpenStack summit – and an unscheduled extension to the RoadStack tour
- Wayne Walls and I chat about what he’s going to do as soon as he gets off the RoadStack RV
Until next time,
It amazes me how incredibly computer savvy both of my kids are now.
Maybe that shouldn’t surprise a dad with two kids less than five years old. It’s just that one moment they’re searching for dinosaurs on Google, then the next moment, they’re fighting each other for the last jelly bean.
Anyway, today my oldest asked me why the computer keyboard was all out of order. Why does it go Q-W-E-R-T-Y-… when it should go A-B-C-D-E-… I explained that there are certain letters that are used more often than others when you write / type words, and that this design was used by a lot of people over many years, and is the best and fastest way to type.
And then she asked, “Are you sure it’s the best?”
(Kids. Always with the backtalk.)
So we started out on the path to figure it out. We googled QWERTY, and a number of interesting texts came up discussing the matter. After a few minutes of reading, some healthy discussion, and a couple of shots of chocolate milk, here’s what we came up with.
The QWERTY model was actually developed quite a long time ago, in the late 1800s, and evolved over a number of attempts at an efficient keyboard, including one that started “A-B-C-D-E-…” The key here is that a few of the pioneers in this space happened to be tied into a company that put out one of the earliest “writing machines”, which embedded the QWERTY format into its model, and marketed the hell out of it. Next thing you know, it’s everywhere.
(We’ve seen that movie before.)
We also learned about a science known as “letter pair frequency” – for example, the frequency of the consecutive letters “th” in the English language is 1.52%, while the frequency of the consecutive letters “ur” is 0.02%. This is a key part of defining how the letters were to be arranged. A lot of pretty smart people spent a lot of time figuring that out. God bless ’em.
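If you’re curious, this kind of bigram count is easy to sketch yourself. Here’s a minimal Python example – the sample sentence below is just illustrative, not the corpus the original keyboard designers used:

```python
from collections import Counter

def bigram_frequencies(text):
    """Count the relative frequency of consecutive letter pairs in a text."""
    # Keep only letters; note this means pairs can span word boundaries
    letters = [c for c in text.lower() if c.isalpha()]
    pairs = ["".join(p) for p in zip(letters, letters[1:])]
    counts = Counter(pairs)
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

freqs = bigram_frequencies("the quick brown fox jumps over the lazy dog")
print(sorted(freqs.items(), key=lambda kv: -kv[1])[:3])
```

Run it over a big enough body of English text and pairs like “th” and “he” bubble right to the top, just like the research says.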
So, it’s clear that QWERTY is a pretty good system, based on actual science. But our question here was “Is QWERTY the best?”
Turns out there are a number of issues with the QWERTY model – a lot of the most frequent letter combinations require the same finger to type them, one hand ends up typing more than the other in a lot of cases, etc., etc., etc.
One of the most famous contrarians to QWERTY was a gentleman by the name of Dvorak, inventor of the Dvorak Simplified Keyboard, seen below. It was apparently developed with a focus on letter pair frequencies and hand physiology. In fact, some Dvorak-ites claim less of a chance of carpal tunnel with this keyboard.
Some interesting facts we uncovered about the Dvorak keyboard:
- The home keys are made up of the vowels and the most used consonants.
- You can type ~400 of English’s most common words with just the Dvorak “home keys” (vs ~100 on QWERTY).
- In general, ~70% of typing occurs on the Dvorak home keys (vs ~30% on QWERTY)
- Dvorak attempts to make stroking motion go from the outside of the keyboard toward the middle, based on the assumption that it’s easier to tap your fingers from pinky to pointer vs the other way around
- Some operating systems offer you the option to configure your keyboard in the Dvorak model
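Out of curiosity, that home-row claim is easy to spot check with a few lines of Python – note the word list below is just a handful of common words I picked, not a real frequency-ranked corpus:

```python
QWERTY_HOME = set("asdfghjkl;")
DVORAK_HOME = set("aoeuidhtns")

def typeable_on(home_keys, words):
    """Return the words that can be typed using only the given home-row keys."""
    return [w for w in words if set(w) <= home_keys]

# A few common English words -- illustrative only
words = ["the", "and", "that", "this", "it", "is", "as", "at", "on", "he"]
print("QWERTY home row:", typeable_on(QWERTY_HOME, words))
print("Dvorak home row:", typeable_on(DVORAK_HOME, words))
```

Even on this tiny sample, every one of those ten words stays on the Dvorak home row, while QWERTY’s home row only manages “as”.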
Interesting stuff, no?
At this point, knowing that our research was far from thorough, we figured that no matter how interesting or scientifically superior the Dvorak model may prove to be, QWERTY has ingrained itself so deeply into our culture at this point that it’s difficult to see the mainstream world changing.
On computers, that is.
It will be interesting to see how / if this becomes more of a discussion topic now that we’re heading toward smaller mobile devices, tablets, etc. Also with advances like smart typing (aka autocorrect) and the “Swype” technique for mobile keyboards, perhaps the opportunity to better fine tune the keyboard will present itself.
So is QWERTY the best? Probably not.
Will the world likely adopt a better model in light of newer input devices?
Well, maybe it’s time we thought about it.
It is also at this point that I realize my kid is long gone, and is now watching Disney channel upstairs. Oh, well.
At least she’s watching Little Einsteins. 🙂
(“Princess, can you help Daddy reset the computer keyboard back to QWERTY?”)
Until next time.
Joseph B George
@jbgeorge / www.jbgeorge.net
The other day I was asked a far too familiar question…
“What is a cloud?”
Oh, man – here we go…
Like a kid in a candy store, I began to discuss the origins and evolution of IT. Servers, storage, networks, virtualization… on and on, reveling in exercising my cloud muscle. IaaS, PaaS, SaaS… every “*aaS” I could think of! Vendors, strategies, theories, models… it was an amazing tribute to one of our most talked about spaces, if I do say so myself.
The response from the interested party?
“Well, it kinda looks like cotton candy. Hey! Let’s get some cotton candy!”
At this point, I should note that the question came from my five year old. And the question was actually phrased, “Daddy, what is a cloud?”
Often, those of us in the tech world that live and breathe the bleeding edge forget to translate all this cool gadgetry into tangible benefits for the rest of the world. Not that the “rest of the world” couldn’t grasp the technical concepts, but great technologies are often great because of the simple benefits they bring.
So again – “what is a cloud?”
- For the small business, cloud could mean more focus on core competencies and less on IT.
- Cloud can help drive better efficiency when it comes to power consumption and carbon footprint, helping the environment.
- In the case of the State of Minnesota, which has agreed to begin adopting cloud computing, it could mean less long-run overhead and costs associated with running the state – which could turn into more services, lower taxes, etc. (Learn more about this development at http://bit.ly/bz47j4.)
Now there’s no need to try and translate every tech concept into layman’s terms, but for better or worse, cloud’s getting a lot of play. It’s important we remember to articulate the value of these amazing technologies in terms that demonstrate how they make the world better.
How have you answered the question “What is a cloud?” with non-techies you’ve encountered?
Share your stories here or tweet me on Twitter (@jbgeorge).
OK, then – I’m off to get some cotton candy.
Until next time,
Joseph B George
@jbgeorge / www.jbgeorge.net