Archive

Posts Tagged ‘analytics’

Tech in Real Life: Content Delivery Networks, Big Data Servers and Object Storage

April 6, 2015


This is a duplicate of a blog I authored for HP, originally published at hp.nu/Lg3KF.

In a joint blog authored with theCUBE’s John Furrier and Scality’s Leo Leung, we pointed out some of the unique characteristics of data that make it act and look like a vector.

At that time, I promised we’d delve into specific customer uses for data and emerging data technologies – so let’s begin with our friends in the telecommunications and media industries, specifically around the topic of content distribution.

But let’s start at a familiar point for many of us…

If you’re like most people, when it comes to TV, movies, and video content, you’re an avid (sometimes binge-watching) fan of video streaming and video on demand.  More and more people are opting to view content via streaming technologies.  In fact, a growing number of broadcast shows are viewed on mobile and streaming devices, as are live events like this year’s NCAA basketball tournament.

These are fascinating data points to ponder, but think about what goes on behind them.

How does all this video content get stored, managed, and streamed?

Suffice it to say, telecom and media companies around the world are addressing this exact challenge with content delivery networks (CDNs).  There are a variety of technologies out there to help build CDNs, and one interesting new enabler is object storage, especially when it comes to petabytes of data.

Here’s how object storage helps when it comes to streaming content.

  • With streaming content comes a LOT of data.  Managing and moving that data is a key challenge, and object storage handles it well.  It allows telecom and media companies to manage many petabytes of content with ease – many IT options lack that ability to scale.  Object storage features like replication and erasure coding let users break large volumes of data into bite-size chunks and disperse them across several server nodes – and, oftentimes, several geographic locations.  When the data is needed, it is rapidly reassembled and served (see the toy sketch after this list).
  • Raise your hand if you absolutely love to wait for your video content to load.  (Silence.)  The fact is, no one likes watching the status bar slowly creep along while waiting for zombies, your futbol club, or the next big singing sensation to show up on the screen.  Because object storage technologies can support very high bandwidth and millions of HTTP requests per minute, anyone looking to distribute media can give their customers access to content with superior performance.  Much of that comes down to the network, but also to the software managing the data behind it – and object storage fits the bill.
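
To make that first point concrete, here’s a toy sketch (in Python) of the chunk-and-disperse idea behind erasure coding.  Production object stores use Reed-Solomon-style codes that can survive the loss of several chunks across nodes and sites; this simplified version uses a single XOR parity chunk, so it only tolerates losing one chunk, and every name in it is illustrative rather than a real product API.

```python
# A toy illustration of the chunk-and-disperse idea behind erasure coding.
# Production systems use Reed-Solomon codes that tolerate multiple lost
# chunks; this simplified sketch uses XOR parity, which survives the loss
# of any ONE data chunk. All names here are illustrative, not a real API.

def split_into_chunks(data: bytes, k: int) -> list:
    """Break an object into k equal-sized data chunks (zero-padded)."""
    size = -(-len(data) // k)  # ceiling division
    padded = data.ljust(k * size, b"\0")
    return [padded[i * size:(i + 1) * size] for i in range(k)]

def xor_parity(chunks: list) -> bytes:
    """Compute a parity chunk as the XOR of all data chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def recover_missing(chunks: list, parity: bytes) -> list:
    """Rebuild the single missing chunk (marked None) from the survivors."""
    missing = chunks.index(None)
    rebuilt = bytearray(parity)
    for j, chunk in enumerate(chunks):
        if j != missing:
            for i, b in enumerate(chunk):
                rebuilt[i] ^= b
    chunks[missing] = bytes(rebuilt)
    return chunks

# Split an "object" into 4 data chunks plus 1 parity chunk -- in a real
# system these 5 pieces would live on 5 different nodes or sites.
chunks = split_into_chunks(b"petabytes of streaming video content", k=4)
parity = xor_parity(chunks)
chunks[2] = None  # simulate losing a node (and the chunk stored on it)
restored = recover_missing(chunks, parity)
print(b"".join(restored).rstrip(b"\0"))  # the original object, intact
```

The takeaway is the flow: an object is split, the pieces (plus parity) land on different nodes, and the original can still be reassembled even after one of those nodes disappears.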

These are just two of the considerations, and there are many others, but object storage is an interesting technology to evaluate if you’re looking to get content or media online, especially if you’re in the telecom or media space.
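
One more hedged sketch, this time for the second bullet above: streaming clients typically pull a large object piece by piece with HTTP Range requests, which is exactly the high-volume access pattern object storage platforms are built to serve.  The URL and chunk size below are placeholders rather than a real endpoint, and the code assumes the server honors Range headers.

```python
# A minimal sketch of a streaming client that reads a video object in
# fixed-size pieces via HTTP Range requests (standard library only).
import urllib.error
import urllib.request

OBJECT_URL = "http://cdn.example.com/videos/episode-01.mp4"  # hypothetical URL
CHUNK_BYTES = 4 * 1024 * 1024  # fetch 4 MB at a time

def stream_object(url: str, chunk_bytes: int = CHUNK_BYTES):
    """Yield an object piece by piece using HTTP Range requests."""
    offset = 0
    while True:
        req = urllib.request.Request(url)
        req.add_header("Range", f"bytes={offset}-{offset + chunk_bytes - 1}")
        try:
            with urllib.request.urlopen(req) as resp:
                body = resp.read()
        except urllib.error.HTTPError as err:
            if err.code == 416:  # requested range is past the end of the object
                break
            raise
        if body:
            yield body
        if len(body) != chunk_bytes:
            break  # short (or over-long) read: end of object, or Range ignored
        offset += chunk_bytes

# Hand each piece to the player's buffer as it arrives.
for piece in stream_object(OBJECT_URL):
    pass  # e.g., player_buffer.feed(piece)
```

Each request is small and independent, which is why a platform that can absorb millions of such requests per minute translates directly into smooth playback for viewers.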

Want a real-life example? Check out how our customer RTL II, a European television station, addressed its video streaming challenge with object storage.  It’s all detailed here in this case study – “RTL II shifts video archive into hyperscale with HP and Scality.”  Using HP ProLiant SL4540 big data servers and object storage software from HP partner Scality, RTL II was able to boost its video transfer speeds by 10x.

Webinar this week! If this is a space you’d like to learn more about, Scality and HP are hosting a couple of webinars this week, specifically on object storage and content delivery networks.  Be sure to join us – here are the details:

Session 1 (Time-friendly for European and APJ audiences)

  • Who:  HP’s big data strategist, Sanjeet Singh, and Scality VP, Leo Leung
  • Date:  Wed, Apr 8, 2015
  • Time:  3pm Central Europe Summer / 8am Central US
  • Registration Link

Session 2 (Time-friendly for North American audiences)

  • Who:  HP Director, Joseph George, and Scality VP, Leo Leung
  • Date:  Wed, Apr 8, 2015
  • Time: 10am Pacific US / 12 noon Central US
  • Registration Link

And as always, for any questions at all, you can always send us an email at BigDataEcosystem@hp.com or visit us at www.hp.com/go/ProLiant/BigDataServer.

And now off to relax and watch some TV – via streaming video of course!

Until next time,

JOSEPH
@jbgeorge

Purpose-Built Solutions Make a Big Difference In Extracting Data Insights: HP ProLiant SL4500

October 20, 2014

This is a duplicate of the blog I’ve authored on the HP blog site at http://h30507.www3.hp.com/t5/Hyperscale-Computing-Blog/Purpose-Built-Solutions-Make-a-Big-Difference-In-Extracting-Data/ba-p/173222#.VEUdYrEo70c

Indulge me as I flash back to the summer of 2012 at the Aquatics Center in London, England – it’s the Summer Olympics, where some of the world’s top swimmers, representing a host of nations, are about to kick off the Men’s 100m Freestyle swimming competition. The starter gun fires, and the athletes give it their all in a heated head to head match for the gold.

And the results of the race are astounding: USA’s Nathan Adrian took the gold medal with a time of 47.52 seconds, with Australia’s James Magnussen finishing a mere 0.01 seconds later to claim the silver medal! It was an incredible display of competition, and a real testament to the power of the human spirit.

For an event demanding such precise timing, we can only assume that very sensitive and highly calibrated measuring devices were used to capture accurate results. And it’s a good thing they did – fractions of a second separated first and second place.

Now, you and I have both measured time before – we’ve checked our watches to see how long it has been since the workday started, we’ve used our cell phones to see how long we’ve been on the phone, and so on. It got the job done. Surely the Olympic judges at the 2012 Men’s 100m Freestyle had some of these less precise options available – why didn’t they simply huddle around one of their wristwatches to determine the winner of the gold, silver, and bronze?

OK, I am clearly taking this analogy to a silly extent to make a point.

When you get serious about something, you have to step up your game and secure the tools you need to ensure the job gets done properly.

There is a real science behind using purpose-built tools to solve complex challenges, and the same is true with IT challenges, such as those addressed with big data and scale-out storage. There are a variety of infrastructure options to deal with the staggering amounts of data, but there are very few purpose-built server solutions like HP’s ProLiant SL4500 product – a server solution built SPECIFICALLY for big data and scale-out storage.

The HP ProLiant SL4500 was built to handle your data. Period.

  • It provides unprecedented drive capacity, with over THREE PB in a single rack
  • It delivers scalable performance across multiple drive technologies like SSD, SAS, or SATA
  • It provides significant energy savings with shared cooling and power, and reduced complexity with fewer cables
  • It offers flexible configurations:
    • A 1-node, 60 large form factor drive configuration, perfect for large-scale object storage with software vendors like Cleversafe and Scality, or with open source projects like OpenStack Swift and Ceph
    • A 2-node, 25 drive per node configuration, ideal for running Microsoft Exchange
    • A 3-node, 15 drive per node configuration, optimal for running Hadoop and analytics applications

If you’re serious about big data and scale-out storage, it’s time to consider stepping up your game with the SL4500. Purpose-built makes a difference, and the SL4500 was purpose-built to help you make sense of your data.

You can learn more about the SL4500 by talking to your HP rep or by visiting us online at HP ProLiant SL4500 Scalable Systems or at Object Storage Software for ProLiant.

And if you’re here at Hadoop World this week, come on by the HP booth – we’d love to chat about how we can help solve your data challenges with SL4500-based solutions.

Until next time,

Joseph George

@jbgeorge

Day 1: Big Data Innovation Summit 2014

April 10, 2014


Hello from sunny Santa Clara!

[Photo: BDIS keynote, day 1]

My team and I are here at the BIG DATA INNOVATION SUMMIT representing Dell (the company I work for), and it’s been a great day one.

I just wanted to take a few minutes to jot down some interesting ideas I heard today:

  • In Daniel Austin’s keynote, he argued that the “Internet of things” should really be the “individual network of things” – highlighting that the number of devices, their connectivity, their availability, and their partitioning is what will be key in the future.
  • Another data point from Daniel’s talk: every person is predicted to generate 20 PETABYTES of data over the course of a lifetime!
  • Juan Lavista of Bing hit on a number of key myths around big data:
    • the most important part of big data is its size
    • to do big data, all you need is Hadoop
    • with big data, theory is no longer needed
    • data scientists are always right 🙂

QUOTE OF THE DAY:  “Correlation does not yield causation.” – Juan Lavista (Bing)

  • Anthony Scriffignano was quick to admonish the audience that “it’s not just about data, it’s not just about the math…  [data] relationships matter.”
  • The state of Utah is taking a very progressive view of the areas where analytics can drive efficiency at the state level – census data use, welfare system fraud, and more.  And it appears Utah is taking a leadership position in doing so.

I also had the privilege of moderating a panel on the topic of the convergence between HPC and the big data spaces, with representatives on the panel from Dell (Armando Acosta), Intel (Brent Gorda), and the Texas Advanced Computing Center (Niall Gaffney).  Some great discussion about the connections between the two, plus tech talk on the Lustre plug-in and the SLURM resource management project.

Additionally, Dell product strategists Sanjeet Singh and Joey Jablonski presented on a number of real user implementations of big data and analytics technologies – from university student retention projects to building a true centralized, enterprise data hub.  Extremely informative.

All in all, a great day one!

If you’re out here, stop by and visit us at the Dell booth.  We’ll be showcasing our Hadoop and big data solutions, as well as some of the analytics capabilities we offer.

(We’ll also be giving away a Dell tablet on Thursday at 1:30, so be sure to get entered into the drawing early.)

Stay tuned, and I’ll drop another update tomorrow.

Until next time,

JOSEPH
@jbgeorge

Michael Dell Comments on the “Data Economy”

March 24, 2014

This is a repost of my blog at  .

In this short interview with Inc., Michael Dell provides an overview of the company’s transformation into a leading player in the “data economy.”   

As Michael notes, with the costs of collecting data decreasing, more companies in a growing number of industries are making better use of existing data sources, and gathering data from new sources. 

And that’s where Dell has been enabling customers for years with solutions built on technologies like Hadoop and NoSQL.  Helping companies and organizations make better use of this data, and assisting them in using it to solve their challenges, are just a few of the ways Dell has changed the Big Data conversation, and built an entirely new enterprise business along the way.

As a member of the Technology CEO Council, Michael also recently joined other tech CEOs to discuss the data economy with policy makers.  As an example of the potential of the data economy, he explained how Dell’s growing health information technology practice includes 7 billion medical images. These images are in an aggregated data set allowing researchers to mine them for patterns and predictive analytics.

“There’s lots that can be done with this data that was very, very siloed in the past,” Michael told Computerworld. “We’re really just kind of scratching the surface.”

It’s certainly an exciting time to be at Dell – and the data revolution continues!
