
Archive for the ‘big data’ Category

8 Letters to HPC Success – SOFTWARE

March 26, 2018


This is a duplicate of a high-performance computing blog I authored for Cray, originally published at the Cray Blog Site.


These days, more and more organizations are using high-performance computing systems to seek out answers to big questions. From detecting fraud in financial transactions within milliseconds, to scouring the universe for life, to working toward a cure for cancer, the applications working on these questions require an infrastructure that allows for numerous cores, large swaths of data, and rapid results.

These systems are carefully designed to optimize application performance at scale. Processors, memory, storage, networking — all carefully selected and implemented to drive optimal performance.

However, far too many vendors ignore the impact of another key area of the infrastructure stack … the SOFTWARE.

Software is a key component and a remarkable enabler for gleaning optimal performance from an application and a system. And that means getting that much more performance out of your system, getting you to insights and answers faster.

Along with fine-tuning your hardware infrastructure, imagine the impact of an HPC-optimized OS and operating environment. You could witness lower jitter, more focused intelligence built into the various levels of the hardware system, and greater levels of job deployment flexibility.

Or think of the impact of a unified, cross-processor programming environment, designed to help your application developers get the most out of their HPC applications — a programming interface that gives them broad control and flexibility when it comes to porting, compiling, debugging and optimizing the applications that get you results.

These software capabilities are already a reality for today’s Cray customers.

From the Cray Linux® Environment to the Cray Programming Environment, from powerful systems management to amazing storage telemetry, and from emerging cloud capabilities to effective analytics packages, we enable our customers to take performance to the next level with our comprehensive software portfolio.

Here at Cray, we understand the power of the software stack and have been committed to building the best for several decades. In addition to our work with our ecosystem of software partners and open source communities, we remain big believers in rolling up our sleeves and getting into the code. Our software engineering teams work hand in hand with our system hardware teams to ensure Cray systems are optimized at all levels, and find ways to extend the reach of our partners and open-source projects.

And there’s no slowing down in sight.

It’s time you leveraged the power of HPC-optimized software to answer the big questions you have in your business.

In the coming weeks we’ll bring you a more in-depth look at Cray’s HPC-optimized software stack with blogs, videos, webinars and other helpful tools so that you can maximize the performance of your Cray systems and applications.

Learn more about Cray software.

OpenStack, Now and Moving Ahead: Lessons from My Own Personal Transformation

December 15, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

It’s a transformative time for OpenStack.

And I know a thing or two about transformations.

Over the last two and a half years, I’ve managed to lose over 90 pounds.

(Yes, you read that right.)

It was a long and arduous effort, and it is a major personal accomplishment that I take a lot of pride in.

Lessons I’ve Learned

When you go through a major transformation like that, you learn a few things about the process, the journey, and about yourself.

With OpenStack on my mind these days – especially after being nominated for the OpenStack Foundation Board election – I can see correlations between my story and where we need to go with OpenStack.

While there are a number of lessons learned, I’d like to delve into three that are particularly pertinent for our open source cloud project.

1. Clarity on the Goal and the Motivation

It’s a very familiar story for many people.  Over the years, I had gained a little bit of weight here and there as life events occurred – graduating college, starting my first job, moving cities, etc. And I had always told myself (and others), “By the time I turn 40 years old, I will be back to my high school weight.”

The year I was to turn 40, I realized that I was running out of time to make good on my word!

And there it was – my goal and my motivation.

So let’s turn to OpenStack – what is our goal and motivation as a project?

According to wiki.OpenStack.org, the OpenStack mission is “to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable. OpenStack is open source, openly designed, openly developed by an open community.”

That’s our goal and motivation:

  • meet the needs of public and private clouds
  • no matter the size
  • simple to deploy
  • very scalable
  • open across all parameters

While we exist in a time where it’s very easy to be distracted by every new, shiny item that comes along, we must remember our mission, our goal, our motivation – and stay true to what we set out to accomplish.

2. Staying Focused During the “Middle” of the Journey

When I was on the path to lose 90 pounds, it was very tempting to be satisfied during the middle part of the journey.

After losing 50 pounds, needless to say, I looked and felt dramatically better than I had before.  Oftentimes, I was congratulated – as if I had reached my destination.

But I had not reached my destination.

While I had made great progress – and there were very tangible results to demonstrate that – I had not yet fully achieved my goal.  And looking back, I am happy that I was not content to stop halfway through. While I had a lot to be proud of at that point, there was much more to be done.

OpenStack has come a long way in its fourteen releases:

  • The phenomenal Newton release focused on scalability, interoperability, and resiliency – things that many potential customers and users have been waiting for.
  • The project has now been validated as 100% compliant by the Core Infrastructure Initiative (CII) as part of the Linux Foundation, a major milestone toward the security of OpenStack.
  • Our community now offers the “Certified OpenStack Administrator” certification, a staple of datacenter software that much of the enterprise expects, further validating OpenStack for them.

We’ve come a long way.   But there is more to go to achieve our ultimate goal.  Remember our mission: open source cloud, public and private, across all size clouds, massively scalable, and simple to implement.

We are enabling an amazing number of users now, but there is more to do to achieve our goal. While we celebrate our current success, and as more and more customers are being successful with OpenStack in production, we need to keep our eyes on the prize we committed to.

3. Constantly Learning and Adapting

While losing 90 pounds was a major personal accomplishment, it could all have been in vain if I did not learn how to maintain the weight loss once it was achieved.

This meant learning what worked and what didn’t work, as well as adapting to achieve a permanent solution.

Case in point: part of most weight loss plans is drinking plenty of water daily, something I still do to this day. Along with its numerous health advantages, it is also a big help with weight loss. However, I found that throughout the day, I would get consumed with daily activities and hit the end of the day without having met my water requirement goal.

Through some experimentation with tactics – which included setting up reminders on my phone and keeping water with me at all times, among other ideas – I arrived at my personal solution: GET IT DONE EARLY.

I made it a point to get through my water goal at the beginning of the day, before my daily activities began. This way, if I did not remember to drink regularly throughout the day, it was of no consequence since I had already met my daily goal.

We live in a world where open source is being adopted by ever more people, many of them open source newcomers. From Linux to Hadoop to Ceph to Kubernetes, we are seeing more and more projects find success with a new breed of users.  OpenStack’s role is not to shun these projects as isolationists, but rather to understand how OpenStack can adapt so that we get maximum attainment of our mission.

This also means that we understand how our project gets “translated” for the bevy of customers who have legitimate challenges that OpenStack can help address. It means that we help potential users wade through the cultural IT changes that will be required.

Learning where our market is taking us, as well as adapting to the changing technology landscape, remains crucial for the success of the project.

Room for Optimism

I am personally very optimistic about where OpenStack goes from here. We have come a long way, and have much to be proud of.  But much remains to be done to achieve our goal, so we must be steadfast in our resolve and focus.

And it is a mission that we can certainly accomplish.  I believe in our vision, our project, and our community.

And take it from me – reaching BIG milestones is very, very rewarding.

Until next time,

JOSEPH
@jbgeorge

HPC: Enabling Fraud Detection, Keeping Drivers Safe, Helping Cure Disease, and So Much More

November 14, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Ask kids what they want to be when they grow up, and you will often hear aspirations like:

  • “I want to be an astronaut, and go to outer space!”
  • “I want to be a policeman / policewoman, and keep people safe!”
  • “I want to be a doctor – I could find a cure for cancer!”
  • “I want to build cars / planes / rocket ships / buildings / etc – the best ones in the world!”
  • “I want to be an engineer in a high performance computing department!”

OK, that last one rarely comes up.

Actually, I’ve NEVER heard it come up.

But here’s the irony of it all…

That last item is often a major enabler for all the other items.

Surprised?  A number of people are.

For many, “high performance computing” – or “HPC” for short – often has a reputation for being a largely academic engineering model, reserved for university PhDs seeking to prove out futuristic theories.

The reality is that high performance computing has influenced a number of things we take for granted on a daily basis, across a number of industries, from healthcare to finance to energy and more.

HPC has been an engineering staple in a number of industries for many years, and has enabled many of the innovations we all enjoy on a daily basis. And it is critical to how we will function as a society going forward.

Here are a few examples:

  • Do you enjoy driving a car or other vehicle?  Thank an HPC department at the auto manufacturer. There’s a good chance that an HPC effort was behind modeling the safety hazards that you may encounter as a driver: unexpected road obstacles, component failure, wear and tear of parts after thousands of miles, and even human driver behavior and error.
  • Have you fueled your car with gas recently?  Thank an HPC department at the energy / oil & gas company.  While there is innovation around electric vehicles, many of us still use gasoline to ensure we can get around.  Exploring for, and finding, oil can be an arduous, time-consuming, expensive effort.  With HPC modeling, energy companies can find pockets of resources sooner, limiting the amount of exploratory drilling, and get fuel for your car to you more efficiently.
  • Have you been treated for illnesses with medical innovation?  Thank an HPC department at the pharmaceutical firm and associated research hospitals.  Progress in the treatment of health ailments can trace innovation to HPC teams working in partnership with health care professionals.  In fact, much of today’s research into genomics and into curing some of the world’s diseases is done on HPC clusters.
  • Have you ever been proactively contacted by your bank about suspected fraud on your account?  Thank an HPC department at the financial institution.  Oftentimes, banks will use HPC clusters to run millions of Monte Carlo simulations to model financial fraud and intrusive hacks.  The number of models required, and the depth at which they analyze, demands a significant amount of processing power and a stronger-than-normal computing infrastructure.  (A minimal sketch of the Monte Carlo idea follows this list.)
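To make the Monte Carlo idea concrete, here is a minimal, hypothetical sketch in Python – not any bank’s actual model. It simulates many random return paths and estimates how often losses exceed a threshold; the distribution, threshold, and scale are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters only - not a real risk or fraud model.
n_simulations = 1_000_000      # HPC clusters run millions of these in parallel
n_days = 10                    # horizon of each simulated path
daily_mean, daily_vol = 0.0002, 0.01   # assumed daily return and volatility
loss_threshold = -0.05         # flag paths that lose more than 5%

rng = np.random.default_rng(seed=42)

# Each row is one simulated path of daily returns.
returns = rng.normal(daily_mean, daily_vol, size=(n_simulations, n_days))

# Compound the daily returns into a total return per path.
total_return = np.prod(1.0 + returns, axis=1) - 1.0

# Estimate the probability of breaching the loss threshold.
breach_probability = np.mean(total_return < loss_threshold)
print(f"Estimated P(loss worse than 5%): {breach_probability:.4%}")
```

On a real cluster, each node would run its own slice of the simulations and the results would be aggregated at the end – which is exactly why core count and interconnect matter so much for this workload.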

And the list goes on and on.

  • Security enablement
  • Banking risk analysis
  • Space exploration
  • Aircraft safety
  • Forecasting natural disasters
  • So much more…

If it’s a tough problem to solve, more likely than not, HPC is involved.

And that’s what’s so galvanizing about HPC. It is a computing model that enables us to do so many of the things we take for granted today, but is also on the forefront of new innovation coming from multiple industries in the future.

HPC is also a place where emerging technologies get early adoption, mainly because experts in HPC require new tech to get even deeper into their trade.  Open source is a major staple of this group of users, especially with deep adoption of Linux.

You also see early adoption of open source cloud tech like OpenStack (to help institutions share compute power and storage to collaborate) and of open source distributed storage tech like Ceph (to connect highly performant file systems to colder, backup storage, often holding “large data”).  I anticipate we will see this space be among the first to broadly adopt tech in Internet-of-Things (IoT), blockchain, and more.

HPC has been important enough that governments around the world have funded multiple initiatives to drive more innovation using the model.

This week, there is a large gathering of HPC experts in Salt Lake City (SuperComputing16), where engineers and researchers meet, discuss, and collaborate on advancing HPC.  From implementing tech like cloud and distributed storage, to best practices in modeling and infrastructure, to driving further progress in medicine, energy, finance, security, and manufacturing, this should be a stellar week with some of the best minds around.  (SUSE is out here as well – David has a great blog here on everything that we’re doing at the event.)

High performance computing: take a second look – it may be much more than you originally thought.

And maybe it can help revolutionize YOUR industry.

Until next time,

JOSEPH
@jbgeorge

Living Large: The Challenge of Storing Video, Graphics, and other “LARGE Data”

August 25, 2016


UPDATE SEP 2, 2016: SUSE has released a brand new customer case study on “large data” featuring New York’s Orchard Park Police Department, focused on video storage.  Read the full story here!

————–

This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Everyone’s talking about “big data” – I’m even hearing about “small data” – but those of you who deal in video, audio, and graphics are in the throes of a new challenge:  large data.

Big Data vs Large Data

It’s 2016 – we’ve all heard about big data in some capacity. Generally speaking, it is truckloads of data points from various sources, massive in its volume (LOTS of individual pieces of data), its velocity (the speed at which that data is generated), and its variety (both structured and unstructured data types).  Open source projects like Hadoop have been enabling a generation of analytics work on big data.

So what in the world am I referring to when I say “large data?”

For comparison, while “big data” is a huge number of individual data points, each of “normal” size, I’m defining “large data” as individual pieces of data that are massive in size. Large data generally does not require real-time or fast access, and is often unstructured in form (i.e., it doesn’t conform to the parameters of relational databases).

Some examples of large data:

[Chart: examples of large data]

Why Traditional Storage Has Trouble with Large Data

So why not just throw this into our legacy storage appliances?

Traditional storage solutions (much of what is in most datacenters today) are great at handling standard data – meaning data that is:

  • average / normal in individual data size
  • structured and fits nicely in a relational database
  • when totaled up, doesn’t exceed ~400TB or so in total space

Unfortunately, none of this works for large data.  Large data, due to its size, can consume traditional storage appliances VERY rapidly.  And, since traditional storage was developed when data was thought of in smaller terms (megabytes and gigabytes), large data on traditional storage can bring about performance / SLA impacts.

So when it comes to large data, traditional storage ends up being consumed too rapidly, forcing us to consider adding expensive traditional storage appliances to accommodate.

“Overall cost, performance concerns, complexity and inability to support innovation are the top four frustrations with current storage systems.” – SUSE, Software Defined Storage Research Findings (Aug 2016)

Object Storage Tames Large Data

Object storage, as a technology, is designed to handle storage of large, unstructured data from the ground up.  And since it was built on scalable cloud principles, it can scale to terabytes, petabytes, exabytes, and theoretically beyond.

When you introduce open source to the equation of object storage software, the economics of the whole solution become even better. And since scale-out, open source object storage is essentially software running on commodity servers with local drives, the object storage should scale without issue – as more capacity is needed, you just add more servers.
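To make the interface concrete, here is a minimal sketch in Python of how an application might talk to an S3-compatible object store (Ceph-based object storage typically exposes such an API through its RADOS Gateway). The endpoint URL, credentials, bucket, and file names below are placeholders, not a real deployment.

```python
import boto3

# Hypothetical S3-compatible endpoint and credentials - placeholders only.
s3 = boto3.client(
    "s3",
    endpoint_url="http://object-gateway.example.local:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket = "video-archive"
s3.create_bucket(Bucket=bucket)

# Store a large, unstructured object (for example, a surveillance video clip).
s3.upload_file("dashcam_2016-08-25.mp4", bucket, "dashcam/2016-08-25.mp4")

# Retrieve it later by key - the cluster decides where the bytes actually live.
response = s3.get_object(Bucket=bucket, Key="dashcam/2016-08-25.mp4")
print(response["ContentLength"], "bytes retrieved")
```

The point is the addressing model: applications read and write whole objects by key, while the storage cluster handles placement, replication, and growth behind that simple interface – which is what lets you scale by just adding servers.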

When it comes to large data – data that is unstructured and individually large, such as video, audio, and graphics – SUSE Enterprise Storage provides the open, scalable, cost-effective, and performant storage experience you need.

SUSE Enterprise Storage – Using Object Storage to Tame Large Data

Here’s how:

  • It is designed from the ground up to tackle large data.  The Ceph project, which is the core of SUSE Enterprise Storage, is built on a foundation of RADOS (Reliable Autonomic Distributed Object Store), and leverages the CRUSH algorithm to scale data across whatever size cluster you have available, without performance hits.
  • It provides a frequent and rapid innovation pace. It is 100% open source, which means you have the power of the Ceph community to drive innovation, like erasure coding.  SUSE gives these advantages to its customers by providing a fully updated release every six months, while other Ceph vendors give customers large data features only as part of a once-a-year release.
  • It offers pricing that works for large data. Many object storage vendors, both commercial and open source, choose to charge you based on how much data you store.  The price to store 50TB of large data is different than the price to store 100TB of large data, and there is a different price if you want to store 400TB of large data – even if all that data stays on one server!  SUSE chooses to provide their customers “per node” pricing – you pay subscription only as servers are added.  And when you use storage dense servers, like the HPE Apollo 4000 storage servers, you get tremendous value.

 

Why wait?  It’s time to kick off a conversation with your SUSE rep on how SUSE Enterprise Storage can help you with your large data storage needs.  You can also click here to learn more.

Until next time,

JOSEPH

@jbgeorge

Address Your Company’s Data Explosion with Storage That Scales

June 27, 2016


This is a duplicate of a blog I authored for SUSE, originally published at the SUSE Blog Site.

Experts predict that our world will generate 44 ZETTABYTES of digital data by 2020.

How about some context?

[Image: data volume compared to grains of sand]

Now, you may think that these are all teenage selfies and funny cat videos – in actuality, much of it is legitimate data your company will need to stay competitive and to serve your customers.

 

The Data Explosion Happening in YOUR Industry

Some interesting factoids:

  • An automated manufacturing facility can generate many terabytes of data in a single hour.
  • In the airline industry, a commercial airplane can generate upwards of 40 TB of data per hour.
  • Mining and drilling companies can gather multiple terabytes of data per minute in their day-to-day operations.
  • In the retail world, a single store can collect many TB of customer data, financial data, and inventory data.
  • Hospitals quickly generate terabytes of data on patient health, medical equipment data, and patient x-rays.

The list goes on and on. Service providers, telecommunications, digital media, law enforcement, energy companies, HPC research groups, governments, the financial world, and many other industries (including yours) are experiencing this data deluge now.

And with terabytes of data being generated by single products by the hour or by the minute, the next stop is coming up quick:  PETABYTES OF DATA.

Break the Status Quo!

 

Status Quo Doesn’t Cut It

I know what you’re thinking:  “What’s the problem? I’ve had a storage solution in place for years.  It should be fine.”

Not quite.

  1. You are going to need to deal with a LOT more data than you are storing today in order to maintain your competitive edge.
  2. The storage solutions you’ve been using for years have likely not been designed to handle this unfathomable amount of data.
  3. The costs of merely “adding more” of your current storage solutions to deal with this amount of data can be prohibitive.

The good news is that there is a way to store data at this scale with better performance at a much better price point.

 

Open Source Scale Out Storage

Why is this route better?

  • It was designed from the ground up for scale.
    Much like how mobile devices changed the way we communicate / interact / take pictures / trade stock, scale out storage is a different design for storage. Instead of all-in-one storage boxes, it uses a “distributed model” – farming out the storage to as many servers / hard drives as it has access to, making it very scalable and very performant.  (Cloud environments leverage a very similar model for computing.)  A minimal sketch of this placement idea follows this list.
  • Its cost is primarily commodity servers with hard drives and software.
    Traditional storage solutions are expensive to scale in capacity or performance.  Instead of expensive engineered black boxes, we are looking at commodity servers and a bit of software that sits on each server – you then just add a “software + server” combo as you need to scale.
  • When you go open source, the software benefits get even better.
    Much like other open source technologies, such as Linux operating systems, open source scale out storage lets users take advantage of rapid innovation from the developer communities, as well as cost benefits, since spend goes primarily to support or services rather than software license fees.
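As a minimal illustration of the “distributed model” described above, the hypothetical Python sketch below shows how a scale-out system might decide which servers hold each object. Real systems (Ceph’s CRUSH algorithm, for instance) use far more sophisticated, failure-domain-aware placement; the node names and replica count here are purely illustrative.

```python
import hashlib

# Hypothetical commodity server nodes in the cluster.
NODES = ["storage-node-01", "storage-node-02", "storage-node-03", "storage-node-04"]
REPLICAS = 3  # keep three copies of every object

def place_object(object_name, nodes, replicas):
    """Deterministically pick which nodes store an object and its replicas."""
    digest = hashlib.sha256(object_name.encode("utf-8")).hexdigest()
    start = int(digest, 16) % len(nodes)
    # Walk the node list from the hashed starting point.
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

print(place_object("retail-store-42/inventory-2016-06.parquet", NODES, REPLICAS))

# Scaling out simply means adding another commodity node to the list.
print(place_object("retail-store-42/inventory-2016-06.parquet",
                   NODES + ["storage-node-05"], REPLICAS))
```

Because every client can compute placement the same way, there is no central lookup table to outgrow – that is the property that lets capacity and performance scale by simply adding “software + server” combos.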

 

Ready.  Set.  Go.

At SUSE, we’ve put this together in an offering called SUSE Enterprise Storage, an intelligent software-defined storage management solution, powered by the open source Ceph project.

It delivers what we’ve talked about: open source scale out storage.  It scales, it performs, and it’s open source – a great solution to manage all that data that’s coming your way, that will scale as your data needs grow.

And with SUSE behind you, you’ll get full services and support to any level you need.

 

OK, enough talk – it’s time for you to get started.

And here’s a great way to kick this off: Go get your FREE TRIAL of SUSE Enterprise Storage.  Just click this link, and you’ll be directed to the site (note you’ll be prompted to do a quick registration.)  It will give you quick access to the scale out storage tech we’ve talked about, and you can begin your transition over to the new evolution of storage technology.

Until next time,

JOSEPH
@jbgeorge

Tech in Real Life: Content Delivery Networks, Big Data Servers and Object Storage

April 6, 2015


This is a duplicate of a blog I authored for HP, originally published at hp.nu/Lg3KF.

In a joint blog authored with theCube’s John Furrier and Scality’s Leo Leung, we pointed out some of the unique characteristics of data that make it act and look like a vector.

At that time, I promised we’d delve into specific customer uses for data and emerging data technologies – so let’s begin with our friends in the telecommunications and media industries, specifically around the topic of content distribution.

But let’s start at a familiar point for many of us…

If you’re like most people, when it comes to TV, movies, and video content, you’re an avid (sometimes binge-watching) fan of video streaming and video on-demand.  More and more people are opting to view content via streaming technologies.  In fact, a growing number of broadcast shows are viewed on mobile and streaming devices, as are a number of live events, such as this year’s NCAA basketball tournament.

These are fascinating data points to ponder, but think about what goes on behind them.

How does all this video content get stored, managed, and streamed?

Suffice it to say, telecom and media companies around the world are addressing this exact challenge with content delivery networks (CDN).  There are a variety of interesting technologies out there to help develop CDNs, and one interesting new technology to enable this is object storage, especially when it comes to petabytes of data.

Here’s how object storage helps when it comes to streaming content.

  • With streaming content comes a LOT of data.  Managing and moving that data is a key area to address, and object storage handles it well.  It allows telecom and media companies to effectively manage many petabytes of content with ease – many IT options lack that ability to scale.  Features in object storage like replication and erasure coding allow users to break large volumes of data into bite-size chunks and disperse them over several different server nodes – oftentimes across several different geographic locations.  As data is needed, it is rapidly reassembled and delivered.  (A toy sketch of the erasure-coding idea follows this list.)
  • Raise your hand if you absolutely love to wait for your video content to load.  (Silence.)  The fact is, no one likes to see the status bar slowly creeping along, while you’re waiting for zombies, your futbol club, or the next big singing sensation to show up on the screen.  Because object storage technologies are able to support super high bandwidth and millions of HTTP requests per minute, any customer looking to distribute media is able to allow their customers access to content with superior performance metrics.  It has a lot to do with the network, but also with the software managing the data behind the network, and object storage fits the bill.
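As a toy illustration of the erasure-coding idea mentioned in the first bullet above, the Python sketch below splits a payload into data chunks plus a single XOR parity chunk, so any one lost chunk can be rebuilt from the survivors. Production systems use far stronger codes (Reed-Solomon with multiple parity chunks, for example); this is only the simplest possible instance of the principle, with made-up data.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(payload, k):
    """Split payload into k equal data chunks plus one XOR parity chunk."""
    chunk_len = -(-len(payload) // k)                 # ceiling division
    padded = payload.ljust(k * chunk_len, b"\0")
    chunks = [padded[i * chunk_len:(i + 1) * chunk_len] for i in range(k)]
    parity = chunks[0]
    for chunk in chunks[1:]:
        parity = xor_bytes(parity, chunk)
    return chunks + [parity]                          # k data chunks + 1 parity

def recover(chunks):
    """Rebuild a single missing chunk (data or parity) by XOR-ing the survivors."""
    missing = chunks.index(None)
    survivors = [c for c in chunks if c is not None]
    rebuilt = survivors[0]
    for chunk in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, chunk)
    repaired = list(chunks)
    repaired[missing] = rebuilt
    return repaired

pieces = encode(b"frame data from a streamed video segment", k=4)
pieces[2] = None                      # pretend the node holding chunk 2 failed
print(recover(pieces)[2])             # chunk 2 reconstructed from the other pieces
```

The appeal over plain replication is capacity efficiency: here the overhead is one extra chunk out of five, rather than carrying full second and third copies of the data.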

These are just two of the considerations, and there are many others, but object storage becomes an interesting technology to consider if you’re looking to get content or media online, especially if you are in the telecom or media space.

Want a real life example? Check out how our customer RTL II, a Europe-based television station, addressed their video streaming challenge with object storage.  It’s all detailed here in this case study – “RTL II shifts video archive into hyperscale with HP and Scality.”  Using HP ProLiant SL4540 big data servers and object storage software from HP partner Scality, RTL II was able to boost their video transfer speeds by 10x.

Webinar this week! If this is a space you could use more education on, Scality and HP will be hosting a couple of webinars this week, specifically around object storage and content delivery networks.  If you’re looking for more on this, be sure to join us – here are the details:

Session 1 (Time-friendly for European and APJ audiences)

  • Who:  HP’s big data strategist, Sanjeet Singh, and Scality VP, Leo Leung
  • Date:  Wed, Apr 8, 2015
  • Time:  3pm Central Europe Summer / 8am Central US
  • Registration Link

Session 2 (Time-friendly for North American audiences)

  • Who:  HP Director, Joseph George, and Scality VP, Leo Leung
  • Date:  Wed, Apr 8, 2015
  • Time: 10am Pacific US / 12 noon Central US
  • Registration Link

And as always, for any questions at all, you can always send us an email at BigDataEcosystem@hp.com or visit us at www.hp.com/go/ProLiant/BigDataServer.

And now off to relax and watch some TV – via streaming video of course!

Until next time,

JOSEPH
@jbgeorge

Recognizing the Layers of Critical Insight That Data Offers

March 11, 2015

This is a joint blog I did with John Furrier of SiliconAngle / theCube and Leo Leung from Scality, originally published at http://bit.ly/1E6nQuR 

Data is an interesting concept.

During a recent CrowdChat a number of us started talking about server based storage, big data, etc., and the topic quickly developed into a forum on data and its inherent qualities. The discussion led us to realize that data actually has a number of attributes that clearly define it – similar to how a vector has both a direction and magnitude.

Several of the attributes we uncovered as we delved into this notion of data as a vector include:

  • Data Gravity: This was a concept developed by my friend, Dave McCrory, a few years ago, and it is a burgeoning area of study today.  The idea is that as data is accumulated, additional services and applications are attracted to this data – similar to how a planet’s gravitational pull attracts objects to it.  An example would be the number 10.  If the “years old” context is “attracted” to that original data point, it adds a certain meaning to it.  If the “who” context is applied to a dog vs. a human being, it takes on additional meaning.
  • Relative Location with Similar Data:  You could argue that this is related to data gravity, but I see it as a poignant point that bears calling out.  At a Hadoop World conference many years ago, I heard Tim O’Reilly make the comment that our data is most meaningful when it’s around other data.  A good example of this is medical data.  Health information of a single individual (one person) may lead to some insights, but when placed together with data from members of a family, co-workers at a job site, or the citizens of a town, you are able to draw meaningful conclusions.  When grouped with other data, individual pieces of data take on more meaning.
  • Time:  This came up when someone posed the question “does anyone delete data anymore?”  With storage costs at scale becoming more and more affordable, we concluded that there is no longer an economic need to delete data (though there may be regulatory reasons to do so).  Then came the question of determining what data was not valuable enough to keep, which led to the epiphany that data viewed as not valuable today may become significantly valuable tomorrow.  Medical information is a good example here as well – capturing data that certain individuals in the 1800s were plagued with a specific medical condition may not have seemed meaningful at the time, until you’ve tracked data on their descendants being plagued by similar ills over the next few centuries.  It is difficult to quantify the value of specific data at the time of its creation.

[Image: data as a vector]

In discussing this with my colleagues, it became very clear how early we are in the evolution of data / big data / software defined storage.  With so many angles yet to be discussed and discovered, the possibilities are endless.

This is why it is critical that you start your own journey to unlock the critical insights your data offers.  It can help you drive efficiency in product development, it can help you better serve your constituents, and it can help you solve seemingly unsolvable problems.  Technologies like object storage, cloud-based storage, Hadoop, and more are allowing us to learn from our data in ways we couldn’t imagine 10 years ago.

And there’s a lot happening today – it’s not science fiction.  In fact, we are seeing customers implement these technologies and make a turn for the better – figuring out how to treat more patients, enabling student researchers to share data across geographic boundaries, moving media companies to stream content across the web, and allowing financial institutions to detect fraud when it happens.  Though the technologies may be considered “emerging,” the results are very, very real.

Over the next few months, we’ll discuss specific examples of how customers are making this work in their environments, tips on implementing these innovative technologies, some unique innovations that we’ve developed in both server hardware and open source software, and maybe even some best practices that we’ve developed after deploying so many of these big data solutions.

Stay tuned.

Until next time,

Joseph George – @jbgeorge

Director, HP Servers

Leo Leung – @lleung

VP, Scality

John Furrier – @furrier

Founder of SiliconANGLE Media

Cohost of @theCUBE

CEO of CrowdChat

The HP Big Data Reference Architecture: It’s Worth Taking a Closer Look…

January 27, 2015

This is a duplicate of the blog I’ve authored on the HP blog site at http://h30507.www3.hp.com/t5/Hyperscale-Computing-Blog/The-HP-Big-Data-Reference-Architecture-It-s-Worth-Taking-a/ba-p/179502#.VMfTrrHnb4Z

I recently posted a blog on the value that purpose-built products and solutions bring to the table, specifically around the HP ProLiant SL4540 and how it really steps up your game when it comes to big data, object storage, and other server based storage instances.

Last month, at the Discover event in Barcelona, we announced the revolutionary HP Big Data Reference Architecture – a major step forward in how we, as a community of users, do Hadoop and big data – and it is a stellar example of how purpose-built solutions can revolutionize how you accelerate IT technology, like big data.   We’re proud that HP is leading the way in driving this new model of innovation, with the support and partnership of the leading voices in Hadoop today.

Here’s the quick version on what the HP Big Data Reference Architecture is all about:

Think about all the Hadoop clusters you’ve implemented in your environment – they could be pilot or production clusters, hosted by developer or business teams, and hosting a variety of applications.  If you’re following standard Hadoop guidance, each instance is most likely a set of general purpose server nodes with local storage.

For example, your IT group may be running a 10 node Hadoop pilot on servers with local drives, your marketing team may have a 25 node Hadoop production cluster monitoring social media on similar servers with local drives, and perhaps similar for the web team tracking logs, the support team tracking customer cases, and sales projecting pipeline – each with their own set of compute + local storage instances.

There’s nothing wrong with that setup – it’s the standard configuration that most people use.  And it works well.

However….

Just imagine if we made a few tweaks to that architecture.

  • What if we replaced the good-enough general purpose nodes, and replaced them with purpose-built nodes?
    • For compute, what if we used HP Moonshot, which is purpose-built for maximum compute density and price performance?
    • For storage, what if we used HP ProLiant SL4540, which is purpose-built for dense storage capacity, able to get over 3PB of capacity in a single rack?
  • What if we took all the individual silos of storage, and aggregated them into a single volume using the purpose-built SL4540?  This way all the individual compute nodes would be pinging a single volume of storage.
  • And what if we ensured we were using some of the newer high speed Ethernet networking to interconnect the nodes?

Well, we did.

And the results are astounding.

While there is a very apparent cost benefit and easier management, there is also a surprising bump in read and write performance.

It was a surprise to us in the labs, but we have validated it in a variety of test cases.  It works, and it’s a big deal.

And Hadoop industry leaders agree.

“Apache Hadoop is evolving and it is important that the user and developer communities are included in how the IT infrastructure landscape is changing.  As the leader in driving innovation of the Hadoop platform across the industry, Cloudera is working with and across the technology industry to enable organizations to derive business value from all of their data.  We continue to extend our partnership with HP to provide our customers with an array of platform options for their enterprise data hub deployments.  Customers today can choose to run Cloudera on several HP solutions, including the ultra-dense HP Moonshot, purpose-built HP ProLiant SL4540, and work-horse HP Proliant DL servers.  Together, Cloudera and HP are collaborating on enabling customers to run Cloudera on the HP Big Data architecture, which will provide even more choice to organizations and allow them the flexibility to deploy an enterprise data hub on both traditional and newer infrastructure solutions.” – Tim Stevens, VP Business and Corporate Development, Cloudera

“We are pleased to work closely with HP to enable our joint customers’ journey towards their data lake with the HP Big Data Architecture. Through joint engineering with HP and our work within the Apache Hadoop community, HP customers will be able to take advantage of the latest innovations from the Hadoop community and the additional infrastructure flexibility and optimization of the HP Big Data Architecture.” – Mitch Ferguson, VP Corporate Business Development, Hortonworks

And this is just a sample of what HP is doing to think about “what’s next” when it comes to your IT architecture, Hadoop, and broader big data.  There’s more that we’re working on to make your IT run better, and to lead the communities to improved experience with data.

If you’re just now considering a Hadoop implementation or if you’re deep into your journey with Hadoop, you really need to check into this, so here’s what you can do:

  • My pal Greg Battas posted on the new architecture and goes technically deep into it, so give his blog a read to learn more about the details.
  • Hortonworks has also weighed in with their own blog.

If you’d like to learn more, you can check out the newly published reference architectures that follow this design, featuring HP Moonshot and the ProLiant SL4540.

If you’re looking for even more information, reach out to your HP rep and mention the HP Big Data Reference Architecture.  They can connect you with the right folks to have a deeper conversation on what’s new and innovative with HP, Hadoop, and big data. And, the fun is just getting started – stay tuned for more!

Until next time,

JOSEPH

@jbgeorge

Purpose-Built Solutions Make a Big Difference In Extracting Data Insights: HP ProLiant SL4500

October 20, 2014

This is a duplicate of the blog I’ve authored on the HP blog site at http://h30507.www3.hp.com/t5/Hyperscale-Computing-Blog/Purpose-Built-Solutions-Make-a-Big-Difference-In-Extracting-Data/ba-p/173222#.VEUdYrEo70c

Indulge me as I flash back to the summer of 2012 at the Aquatics Center in London, England – it’s the Summer Olympics, where some of the world’s top swimmers, representing a host of nations, are about to kick off the Men’s 100m Freestyle swimming competition. The starter gun fires, and the athletes give it their all in a heated head to head match for the gold.

And the results of the race are astounding: USA’s Nathan Adrian took the gold medal with a time of 47.52 seconds, with Australia’s James Magnussen finishing a mere 0.01 seconds later to claim the silver medal! It was an incredible display of competition, and a real testament to power of the human spirit.

For an event demanding such precise timing, we can only assume that very sensitive and highly calibrated measuring devices were used to capture accurate results. And it’s a good thing they did – fractions of a second separated first and second place.

Now, you and I have both measured time before – we’ve checked our watches to see how long it has been since the workday started, we’ve used our cell phones to see how long we’ve been on the phone, and so on. It got the job done. Surely the Olympic judges at the 2012 Men’s 100m Freestyle had some of these less precise options available – why didn’t they just simply huddle around one of their wrist watches to determine the winner of the gold, silver and bronze?

OK, I am clearly taking this analogy to a silly extent to make a point.

When you get serious about something, you have to step up your game and secure the tools you need to ensure the job gets done properly.

There is a real science behind using purpose-built tools to solve complex challenges, and the same is true with IT challenges, such as those addressed with big data / scale out storage. There are a variety of infrastructure options to deal with the staggering amounts of data, but there are very few purpose-built server solutions like HP’s ProLiant SL4500 product – a server solution built SPECIFICALLY for big data and scale out storage.

The HP ProLiant SL4500 was built to handle your data. Period.

  • It provides an unprecedented drive capacity with over THREE PB in a single rack
  • It delivers scalable performance across multiple drive technologies like SSD, SAS or SATA
  • It provides significant energy savings with shared cooling and power and reduced complexity with fewer cables
  • It offers flexible configurations:
    • A 1-node, 60 large form factor drive configuration, perfect for large scale object storage with software vendors like Cleversafe and Scality, or with open source projects like OpenStack Swift and Ceph
    • A 2-node, 25 drive per node configuration, ideal for running Microsoft Exchange
    • A 3-node, 15 drive per node configuration, optimal for running Hadoop and analytics applications

If you’re serious about big data and scale out storage, it’s time to consider stepping up your game with the SL4500. Purpose-built makes a difference, and the SL4500 was purpose-built to help you make sense of your data.

You can learn more about the SL4500 by talking to your HP rep or by visiting us online at HP ProLiant SL4500 Scalable Systems or at Object Storage Software for ProLiant.

And if you’re here at Hadoop World this week, come on by the HP booth – we’d love to chat about how we can help solve your data challenges with SL4500 based solutions.

Until next time,

Joseph George

@jbgeorge

Day 2: Big Data Innovation Summit 2014 #DataWest14

April 11, 2014


Hello again, big data fans – from where I’ve learned the San Francisco 49ers will be playing their 2014 NFL season at Levi’s Stadium… Santa Clara!

(BTW, the stadium – from what I could see – is beautiful!  I’m a big NFL fan, and there’s now another reason to come to the San Jose area, other than all the cloud / big data conferences.)

Got a lot of great feedback on yesterday’s “Day 1” post of the summit, so here are some observations from the final day of the conference.

  • Yahoo’s Duru Ahanotu spoke about driving efficiency in how data teams are organized, going through the permutations of generalists vs. specialists and centralized vs. decentralized, and how to best address teams in each model.
  • PayPal’s Moises Nascimento (who is a very captivating speaker) drove home the point that though we are now adopting many of the new data technologies like Hadoop and NoSQL, most of our existing data sources and toolsets still provide value – so there is value in leveraging ALL data sources.
  • Moises also made a point of highlighting that data manipulation is best handled at the SYSTEM level, while data analysis is better managed at the ENTERPRISE level.
  • In HP’s discussion, they introduced the concept of the GEOBYTE – 10^30 bytes, a size of data that the human race is expected to hit in the next few years.

To provide context on the magnitude of a GEOBYTE (10^30 bytes), there are estimated to be only 10^19 GRAINS OF SAND ON THE EARTH.  Think about that for a second.

  • The team also highlighted their view on “Big BI” vs. “Big Data”:
    • Big BI – the same types of analysis, but on more data; more batch processing; results that were not easily actionable
    • Big Data – joining datasets that have not previously been joined, near real-time analysis, and action-oriented results
  • I thought Ancestry.com had one of the best sessions of the event, as they went deep into the GERMLINE algorithm that was the foundation of their business technology, and how they had to create jermline (now with a “j”) based on Hadoop / HDFS to create a SCALABLE matching engine.  As we all know, SCALE matters.  The performance and speed benchmarks between the “G” project and the “j” project were mind-blowing.
  • Finally, I sat in on the Netflix session – in addition to being a big fan of Netflix, as both a consumer and a tech observer, I’ve always been impressed with the way Netflix has evolved their business, and continues to do so.  In this session, they went into great detail on their use of the Amazon cloud services, and their open source projects layered above to enhance functionality and deploy features.  Topics touched on included red / black deployment to allow ease of features into production, and the importance of graceful degradation, so that a failure can be less of a catastrophic event for the end user.
    • One very telling statement was really a commentary on the value of use and participation in the open source process – Netflix was clear that the value they see in being an open source contributor / leader is that it preserves the future of their systems: rather than sitting back and letting the industry decide their direction with tools and tech, Netflix uses open source to help drive and lead the industry to where they see value.
  • (I did resist the urge to ask the Netflix presenter when the next season of “House of Cards” would come out. 🙂 )

One of the frequent questions that came up at the Dell booth was “what is Dell doing in big data?”

The answer?  Actually… quite a bit, and for quite a while.

Between the Dell Apache Hadoop HW+SW+Services Solution, the Toad BI suite, the Kitenga analytics toolsets, and our growing HPC business, Dell has been a part of this movement since its early days.  I’d recommend you drop us a line at Hadoop@Dell.com or visit us at http://www.Dell.com/Hadoop to learn more.

If you were out at the show this week, be sure to leave a comment on your thoughts as well.

Hope everyone has safe trips home, and we’ll see you at the next big data get-together!

Until next time,

JBG
@jbgeorge
BDIS 2014