Saturday, 18 May 2013

Peep this: Google Glass for eyeglass wearers revealed.

Hey four eyes! How about a fifth? A special, prototype version of Google Glass for eyeglass wearers was spied on the faces of at least three engineers and developers at the Google I/O conference, CNET’s Seth Rosenblatt reported.


Rosenblatt said he saw Mark Shand, a Google Glass engineer with a long history in tech that stretches back to Xerox PARC, wearing a prototype model fitted for prescription glasses. And two other Google employees at the show also wore what appeared to be exactly the same prototype model.

"Wearing prescription Google Glass is no harder than [dealing with] sunglasses," said Glass engineer Adam Haberlach, who was spotted wearing the same frames.

Glass for the eyesight-impaired replaces the titanium band in the standard model with a generic pair of black plastic glasses, onto which the display arm and other components are melded. The module did not appear to be removable.


Many experts have surmised that manufacturers such as Warby Parker, which has refocused the eyeglass industry through online sales of inexpensive eyeglasses and sunglasses, could offer Google Glass as an option for consumers. Alternatively, the manufacturers of protective eyewear for sports -- such as Oakley, which has a history with high-tech gear -- may offer integrated modules.

Google’s weird cyber-gizmo may not be ready to leave the nest, but it’s finally reaching adolescence, following a series of other announcements and revelations from the developer-centric Google I/O conference that wrapped up in California on Friday.

Chief among them: the release of half a dozen or more apps to augment the device’s functionality.

In addition to The New York Times app, which was released about two weeks ago, new tools from Evernote, Facebook, Twitter, Tumblr, CNN and Elle were pushed out to the approximately 500 developer model glasses in the wild.

Rather than accessing them through an app store as on iPhones or Windows 8 computers, the apps simply appear when the MyGlass app is updated on the Android device paired with Glass.

The Facebook app, for instance, lets users share pictures and other content with that network's billion-plus users, rather than the struggling Google Plus.

Still, with only the developer price tag of $1,500 to look at and concerns over privacy escalating, consumers appear hesitant about the bleeding-edge device.

A new poll from BiTE Interactive claims that only one in 10 American smartphone users would use Google Glass regularly.

“Google Glass represents a profound social barrier for the average consumer,” BiTE vice president Joseph Farrell said.

Supercomputer Watson's future revealed


IBM's CEO told Fortune what the future holds for the talking supercomputer.


FORTUNE -- On May 15, Fortune senior writer Jessi Hempel interviewed IBM (IBM) CEO Ginni Rometty as a keynote for the National Venture Capital Association's 40th anniversary conference, Venturescape. What follows is an edited version of their conversation. 

Fortune: IBM was once about mainframes, and then PCs and printers. Now IBM is about services, software, Watson. How do you think about the company?

Rometty: Two years ago, IBM had its 100th anniversary, which is when people asked that question the most. And I think one of my biggest learnings has been, never define yourself by a product. I would like us to be thought of as an innovation company. The only way you survive is you continuously transform into something else. It's this idea of continuous transformation that makes you an innovation company. 

Now, an innovation company is something that we all aspire to be. You are aspiring to be that with more than 400,000 employees. What does it even mean to be an innovation company especially at that scale? 

I've got a formula in my head about this idea of continuous transformation, and maybe it's helpful for people as you build out these businesses, because I break it into five different pieces. 

I think the first thing you always have to do is keep reinventing yourself for high value. I think particularly in our tech industry, this is an industry that has violent innovation and then commoditization, and it's a cycle of innovation/commoditization. 

You guys [venture capitalists] play a big role for us in that because you acquire, you divest, and then you remix your own development. But that acquire and divest is a really important piece. You know, for us, we have divested $15 billion. But then, we have acquired 140 companies. So, the formula is, in part, move to higher value. 

The second thing you've got to always do is keep thinking about how to make a market, and I think you can do that by buyer, by category, by geography. 

Then the third thing I think about is, you know, assuming you live to be a little bit older, you've got to reinvent your core franchises. Right? Things like middleware will get reinvented to mobile middleware, as an example. So, you reinvent. 

And then I think the fourth, which you can't forget when you talk about all those people, is the skills. So, you absolutely, you have no choice. And many people can reinvent themselves and some people can't, right? So, you reinvent skills, and then at the end of the day, I think what we all end up doing is you've got to keep looking at yourself, the company, and reinventing the company, which is a bit of when we were talking about how does the company change. 

Instead of making direct investments in companies, you work closely with the VC community. How does your strategy work? 

Let me share with you kind of what we look for and why when we look at an acquisition, because it's a vital piece. In fact, it's so important to how we think of remaking the company, we actually commit over a period of time. We do something called a roadmap, it's a financial roadmap. And the five-year roadmap that started a couple of years ago ends in 2015. We've said we'll acquire $20 billion of companies. 

So, we're pretty clear about the areas we're focused in; I talked about some of them. Where we do acquisitions, they're adjacent. They'll always cluster around strategic areas. 

The second, they'll typically always be intellectual property. I've got a distribution system that goes to 170 countries. If I acquire properly, you know, you may be successful in one or two countries, or one place; I can scale, and that's part of the value that IBM brings. 

In your Annual Report, you talk about an analytics process that you use for acquiring. 

Yes. So, every part of your business will change based on what I consider predictive analytics of the future. So, we did this for all of our acquisitions. We used to look at 300 to 500 things on every acquisition. Okay, well, that wasn't necessarily good for speed, by the way. We did a lot of work with our own Research group on the analytics to be able to predict three to five factors individually by acquisition that are going to make the biggest difference.


Ginni, for many of us, Watson is that Jeopardy! game; but when you think of Watson, it's a much bigger thought, right?

Yes, she has done way more than that! It's about the coming of a third era of technology, right? You know, the original systems counted things. The next set of things were programmable. This thing learns, right? You give it very little instruction and then the more information it has, the more it learns.

So, since the time that you would have seen it on Jeopardy! when it could answer simple questions and it beat the best humans out there, we put it to work for medicine. And we have been working with Memorial Sloan Kettering, MD Anderson, Columbia, some of the finest institutions in this world.

Medicine is one high-value area for Watson. Might there be a role for venture capital there?

There might be, in areas such as interfaces and certain specialized areas.

But here's what's more interesting. That's a high-value area; we are also now about to come out with Watson in what we consider as an advisor, and it will be in volume around research-oriented industries. Think of things like pharma; or as a client advisor in industries that have huge numbers of end retail clients. And so, think of financial services, think of a telco.

And Watson's a service; we will launch an ecosystem where Watson's a service and you build applications around it. And you have to have domain expertise. That's what will be your value of the future.

You call this period a "golden era of technology." And you mentioned this idea of social information as the new production line. What do you mean by that?

I am very aggressive internally in IBM in moving into a social enterprise. It flattens organizations. It enhances their speed. We have hubs, as an example, now around some of our key clients around the world that attach everyone, irrespective of where they are in an organization, in a way that you have come to know how social networking works.

And I envision a day even with all your employees where it's more important what you share, not just what you know ... It's not so much what you say you know; I care about what the world thinks you know, what clients think you know.

And then maybe there's a day that you're paid that way, based on what people think and appreciate and what you share. It's a very different paradigm than today. So, I believe this idea of being the social production line of the future is how many, many companies will operate, particularly in a global environment.

So, to that end, Ginni, you have in your tenure so far at IBM been very aggressive at moving internally into the social tools. What have been the biggest challenges as you do that, or the unexpected things that you've learned?

One of the wonderful side benefits has been the ability to communicate with the organization pervasively and quickly, in a two-way dialogue that is beyond compare. That's not just a one-way push; you learn things very fast. You can take layers out of an organization, right? You can push it down.

But more important, the other reason I am so intent on this is the people we are all hiring. I often call them the millennial generation, and that is the way they work.

Call centre menu options catalogued by frustrated man


Retired IT manager Nigel Clarke, from Kent in the UK, has launched a website listing the call centre menu sequences for accessing thousands of services.

He started the project after growing frustrated about the number of options and amount of recorded information on call centre menus.

Mr Clarke discovered that some automated menus have nearly 80 options.

It can take over four minutes to get to the service required if the caller listens to each stage in full, he said.

As an example, speaking to an adviser at HM Revenue and Customs only required pressing four buttons but it could take six minutes to get through each menu level, Mr Clarke said.

HMRC said it was working on improvements to the service.

"HMRC is looking at ways to improve its interactive voice responses and is getting ready for the introduction of new speech recognition technology," said a spokesman.

"This technology will react to what the caller says instead of asking them to select an option by pushing a button on their phone. HMRC plan to introduce these improvements later this year."

Labour of love

Mr Clarke said the website pleasepress1.com was a "labour of love" which he built after seven years of creating post-it notes of sequences he used regularly.

He used Skype and recording software to make thousands of calls, with the bulk of the work being carried out in the last six months.

Reporting a water leak to Lloyds TSB's home insurance department requires dialling a total of seven numbers, one at each stage of the call (1, 3, 2, 1, 1, 5, 4), and it takes more than four minutes to navigate the 78 menu options, according to the website.
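
For the curious, the structure Mr Clarke catalogues is easy to model. Below is a minimal Python sketch, with a made-up menu tree standing in for any real company's system, showing how a keypress sequence and a worst-case listening time fall out of the data:

```python
# Minimal sketch (hypothetical data): model an IVR menu as a tree and derive
# the keypress sequence and worst-case listening time for a target service.

# Each node maps option_number -> (seconds of recorded prompt, child),
# where a child is either another menu dict or a string naming the service.
menu = {
    1: (20, {1: (15, "report a water leak"),
             2: (15, "general enquiries")}),
    2: (20, "make a claim"),
}

def find_sequence(node, target, keys=(), seconds=0):
    """Depth-first search for `target`, accumulating keys and prompt time."""
    for key, (prompt_len, child) in node.items():
        if child == target:
            return keys + (key,), seconds + prompt_len
        if isinstance(child, dict):
            found = find_sequence(child, target, keys + (key,),
                                  seconds + prompt_len)
            if found:
                return found
    return None

keys, wait = find_sequence(menu, "report a water leak")
print("Press", keys, "- up to", wait, "seconds of prompts if heard in full")
```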

"The companies have these systems in place for a reason," said Mr Clarke. Continue reading the main story “ Start Quote I'm not against the system, but I am against bad design” Nigel Clarke

"I'm not against the system, but I am against bad design."

In an ideal world, he said, companies should just offer different phone numbers for different services.

 "No menu is best - but if it is a necessity then design it properly. I think two levels maximum is ideal. Some stretch to three. You don't really want much more than that."

 Mr Clarke said he was inspired to build the website after being surprised by the "emotional response" he got from people whenever he mentioned it.

 He says he doesn't intend to devote himself full-time to maintaining it.

 "I'd like the companies themselves to say, 'we care about our customers, we'll publish our menus'," he said.

 When tested by the BBC, some of the sequences did not seem to result in significant time savings, while others ended with the user being transferred straight to a customer adviser rather than going through each level of the automated system.

US politicians quiz Google on Glass privacy

US politicians are seeking reassurances from Google that its smart spectacles will respect personal privacy.

A letter has been sent to Google signed by eight members of a Congressional caucus seeking answers about Google Glass.

The letter poses eight questions for Google about the data the gadget will collect about users and non-users.

The group said it was "uncertain" about the privacy protections Google plans to build into the device.

"We are curious whether this new technology could infringe on the privacy of the average American," says the letter from the Congressional privacy caucus which, in the past, has quizzed many tech companies on what they do with the data they gather.

As a bipartisan interest group, the caucus has both Democrats and Republicans as members. 

Google Glass is proving a controversial technology because of its potential to gather images, video and other data about almost anything a user sees. Some have claimed that privacy will be "impossible" if Google Glass and similar products become widely used.

The letter, addressed to Google boss Larry Page, pointed out that the company did not have an unblemished history when it came to handling personal information. It mentioned the widespread criticism Google faced and the fines it had to pay after it inadvertently scooped up data from unprotected wi-fi networks while gathering information for its Street View service.

The politicians want to know how Google will ensure it does not repeat that mistake.

In addition, the Congressmen want to know what Google's policy is for handling the privacy of non-users and how it will respect the wishes of those who do not want to be identified or have any information about them taken from social media sites.

The group also wants Google to explain how it will refine and update its privacy policies to reflect the novel capabilities of Google Glass. The search firm has been given until 14 June to respond to the letter from the caucus.

Some of the points in the letter were addressed during an interview in San Francisco with Google Glass director Steve Lee.

Mr Lee said the Glass team took the privacy of users and non-users seriously, reported the All Things D news blog.

"From the beginning, the social implications… of Glass, of people wearing Glass, has been at the top of our mind," the site reported.

Google says you'll know when Glass is sketchy

What you need to know about Google Glass



NEW YORK (CNNMoney)
Privacy concerns surrounding Google Glass are growing rampant. Eight Congressmen even joined in on the conversation on Thursday, fearing that the cyborg-like technology could be too invasive. Google's response: You'll know when someone wearing Glass is being sketchy.

In one of the most provocative moments of this week's Google I/O developers conference, Glass engineers fielded audience questions regarding privacy concerns. Google was clearly prepared to discuss the topic.

"The process for taking a picture or video has clear social queues," explained Steve Lee, Glass' product director. "When Glass is active, the display lights up. Because of that, you can rest assured I'm not recording you."

Lee was quick to point out that privacy issues surrounding Glass had been considered a top priority by his team since day one.

Glass engineer Charles Mendis noted that Google's official policies for Glass app development are designed to address privacy issues. For example, Google (GOOG, Fortune 500) forbids app developers from shutting off the display while Glass is active.

But Mendis also noted that simple awareness of surroundings will play a key part in privacy.

"If I want to record a video with Glass, I have to be staring at you," said Mendis. "And if you're human, you'll notice me staring at you."

Of course, someone could hack Glass to disable the screen -- Google even led a how-to session on hacking Glass, though the company was quick to note that hacking voids a user's warranty, and shutting off the screen was not among the discussed topics. From the sounds of it, Google would not allow such an app to be distributed through official channels.

"We are thinking very carefully about how we design Glass because new technology always raises new issues," a Google spokeswoman said. "Our Glass Explorer program, which reaches people from all walks of life, will ensure that our users become active participants in shaping the future of this technology."

Google's assurances haven't convinced some members of Congress, however. A bipartisan privacy caucus sent a letter to Google this week expressing concerns about Glass.

"We are curious whether this new technology could infringe on the privacy of the average American," the letter states.

The eight "unanswered questions" the group posed focused mainly on data collection and protection: How will Google avoid "unintentionally collecting data" without consent, as Google admitted its Street View data-collection cars did? What type of data will Google collect?

The committee also asked whether Google plans to implement facial recognition technology, and if so, whether people will be able to opt out.

Google has "experimented" with facial recognition, execs said at I/O on Thursday, but that isn't in the company's current plan for Glass. Still, Google stopped short of saying it's not in future plans.

Despite lawmakers and pundits' concerns, intrigue related to Glass made the technology the clear star of this year's I/O conference. Nearly every discussion dedicated to the headset saw serpentine lines of attendees trying to cram their way into packed conference rooms.

Public paranoia wasn't the only focus. Some attendees talked with excitement about Glass' possibility, while others, including Google's own employees, said they wouldn't be caught dead wearing Glass.

Google also announced seven new apps for the emerging technology, including custom-tailored software from the likes of Facebook (FB), Twitter, Tumblr, Evernote and Elle Magazine. CNN also released an app for news updates.

Glass' first game even surfaced at the conference in the form of Ice Breaker, which is a scavenger hunt of sorts that takes full advantage of the on-board technology. The game requires you to use Glass to find another Glass user, snap a picture with that person, and earn points for doing so. It's a little dumb, a little pointless, but undeniably fun.

Although these big-name apps provide immediate, tangible examples of how Glass can be used, smaller developers showed off many of their innovations as well.

Google says that Glass remains a test-bed for software developers and an unfinished product. By the time Glass is ready for prime time, it's a safe bet Google will make some changes with privacy in mind.

Tuesday, 14 May 2013

Parallel File Systems for 'Extreme' Enterprise Applications

Money, Money, Money…


In the financial sector, revenue is all about numbers, speed and making the best decision at the right time while controlling risk.

We are seeing that in financial services firms, data capture, algorithm development, testing and risk management projects are all pushing the performance boundaries of traditional storage. Hedge funds and trading firms are starting to take advantage of parallelism in order to analyze more positions faster and deploy competitive trading strategies. Using scalable systems that support massively parallel data access, researchers can analyze larger data sets and test more scenarios, delivering faster, more effective models. Similarly, risk managers are increasing their ability to assess total market exposure from only once or twice a day to much shorter intervals.
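
To make the parallelism concrete, here is an illustrative Python sketch (a toy, not any firm's actual model) that spreads Monte Carlo risk scenarios across worker processes; the portfolio assumptions are invented for the example:

```python
# Illustrative sketch: fanning Monte Carlo risk scenarios out across CPU
# cores so more scenarios fit into a shorter risk-assessment window.
import numpy as np
from multiprocessing import Pool

def simulate_losses(args):
    seed, n_scenarios = args
    rng = np.random.default_rng(seed)
    # Toy portfolio assumption: daily returns ~ N(0, 2%) on a $100M book.
    returns = rng.normal(0.0, 0.02, n_scenarios)
    return -returns * 100e6          # losses in dollars

if __name__ == "__main__":
    with Pool(8) as pool:            # 8 workers; tune to the machine
        chunks = pool.map(simulate_losses,
                          [(seed, 250_000) for seed in range(8)])
    losses = np.concatenate(chunks)
    var99 = np.percentile(losses, 99)
    print(f"99% one-day VaR over {losses.size:,} scenarios: ${var99:,.0f}")
```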

All of this goes straight to the bottom line and provides competitive advantage.

Extremely Cloudy Applications


If there is such a thing as “normal” cloud storage today, it’s considered to be slower than “Web speed.” But it makes sense that businesses considering extreme applications will seek the agility and elasticity of cloud hosting rather than building internal infrastructure, especially where the main source of data is a Web 2.0 application.

As cloud providers like Amazon Web Services overcome data IO and storage challenges to provide cloud hosting for IO-intensive big data and video transcoding, we expect to see many service providers vying to support even more extreme applications.

Parallel File Systems to the Rescue/Rescue/Rescue/…


Extreme applications provide several interesting storage system challenges that can be answered by parallel file systems.

Parallel file systems are based on scale-out storage nodes, with an ability to spread and then serve huge files from many nodes and spindles at once. Unlike scale-out clustered NAS, which is designed for serving many files independently each to different clients at the same time (e.g. hosting home directories in a large enterprise or fully partitioned/shared big data blocks), fully parallel file systems are great for serving huge shared files to many inter-related processing nodes at once.
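
As a toy illustration of that access pattern, the following Python sketch has many workers read byte-range “stripes” of one shared file concurrently; a real parallel file system spreads those reads across many storage nodes and spindles rather than one local disk:

```python
# Toy illustration: many workers reading byte-range "stripes" of one shared
# file concurrently -- the access pattern parallel file systems serve well.
import os
from concurrent.futures import ThreadPoolExecutor

PATH = "huge_shared_file.bin"        # hypothetical shared dataset
STRIPE = 64 * 1024 * 1024            # 64 MB stripes

def read_stripe(index):
    with open(PATH, "rb") as f:
        f.seek(index * STRIPE)       # jump straight to this worker's stripe
        return f.read(STRIPE)

size = os.path.getsize(PATH)
stripes = range((size + STRIPE - 1) // STRIPE)
with ThreadPoolExecutor(max_workers=16) as pool:
    for chunk in pool.map(read_stripe, stripes):
        pass                          # hand each stripe to a compute node here
```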

Big data solutions based on Apache Hadoop (with HDFS) are also designed around scale-out storage. But these essentially carve up data into distributed chunks. They are aimed at analytics that can be performed by isolated “mapped” jobs on each node’s assigned local data chunk. This batch style approach enables a commodity-hardware architecture because localized failures are simply reprocessed asynchronously before cluster-wide results are collected and “reduced” to an answer.
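
A minimal local stand-in for that pattern, with Python's multiprocessing in place of a real Hadoop cluster and three short strings standing in for HDFS blocks:

```python
# Minimal stand-in for the Hadoop pattern described above: each "mapped"
# job sees only its local chunk; results are collected and reduced afterward.
from collections import Counter
from multiprocessing import Pool

def map_chunk(chunk):
    # Runs in isolation against local data -- no inter-node chatter needed.
    return Counter(chunk.split())

if __name__ == "__main__":
    chunks = ["error ok ok", "error error ok", "ok ok ok"]  # stand-ins for HDFS blocks
    with Pool(3) as pool:
        partials = pool.map(map_chunk, chunks)
    total = sum(partials, Counter())   # the "reduce" step
    print(total.most_common())
```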

However, extreme apps, including many machine-learning and simulation algorithms, rely on high levels of inter-node communication and sharing globally accessed files. This synchronized cluster processing requires high parallel access throughput, low latency to shared data, and enterprise-class data protection and availability—far different characteristics than HDFS provides.


Industrialization of Extreme Performance


Robust supercomputer parallel file systems are emerging from academia and research and are ready to deploy in commercial enterprise data centers. There are now a number of commercialized Linux-centric parallel file systems based on open source Lustre (e.g. from DDN, Terascala, et al.) for Linux-based cluster computing. And for IT enterprise adoption of extreme applications supporting multiple operating systems with enterprise data protection, we see GPFS (General Parallel File System from IBM) setting the gold standard.

Parallel file systems can be procured and deployed on many kinds of storage nodes, from homegrown clusters to complete appliances. For example, DDN has industrialized a number of parallel file systems to host extreme applications in the enterprise market. Their GRIDScaler solution integrates and leverages parallel file services on their specialized HPC-performing storage hardware. This kind of integrated “appliance” solution can provide a lower TCO for enterprises due to baked-in management, optimized performance, reduced complexity, and full system support.

Extremely Compelling


New data-intensive solutions are enabling the exploitation of huge amounts of data to extract new forms of knowledge and insight. These new extreme applications can ultimately create new revenue streams that could disrupt and change whole markets.

Big data analysis is one type of extreme application, but it is only the tip of the iceberg when it comes to processing large amounts of new data in new ways. New applications that demand parallel file access, high throughput, low latency, and high availability are also on the rise, and more and more enterprises (and service providers) will be tasked to deploy and support them.

Luckily, IT can support these challenging extreme applications by leveraging the vendor trends in industrializing technologies like parallel file systems. Technical excuses are diminishing, and the competition is heating up—it is definitely time for all enterprises to move forward with their own extreme applications.

If you are in IT and haven’t been asked to support an extreme application yet, you should expect to very soon.

Extreme Enterprise Applications Drive Parallel File System Adoption

With the advent of big data and cloud-scale delivery, companies are racing to deploy cutting-edge services. “Extreme” applications like massive voice and image processing or complex financial analysis modeling can push storage systems to their limits. Examples of some high visibility solutions include large-scale image pattern recognition applications and financial risk management based on high-speed decision-making.

These ground-breaking solutions, made up of very different activities but with similar data storage challenges, create incredible new lines of business representing significant revenue potential.

Every day here at Taneja Group we see more and more mainstream enterprises exploring similar “extreme service” opportunities. But when enterprise IT data centers take stock of what is required to host and deliver these new services, it quickly becomes apparent that traditional clustered and even scale-out file systems—the kind that most enterprise data centers (or cloud providers) have racks and racks of—simply can’t handle the performance requirements.

There are already great enterprise storage solutions for applications that need either raw throughput, high capacity, parallel access, low latency or high availability—maybe even for two or three of those at a time. But when an “extreme” application needs all of those requirements at the same time, only supercomputing type storage in the form of parallel file systems provides a functional solution.

The problem is that most commercial enterprises simply can’t afford or risk basing a line of business on an expensive research project.

The good news is that some storage vendors have been industrializing former supercomputing storage technologies, hardening massively parallel file systems into commercially viable solutions. This opens the door for revolutionary services creation, enabling mainstream enterprise data centers to support the exploitation of new extreme applications.

 High Performance Computing in the Enterprise Data Center 

Organizations are creating ever more data every day, and that data growth challenges storage infrastructure that is already creaking and groaning under existing loads. On top of that, we are starting to see mainstream enterprises roll out exciting heavy-duty applications as they compete to extract value out of all that new data, creating new forms of storage system “stress.” In production, these extreme applications can require systems that perform more like high-performance computing (HPC) research projects than like traditional business operations or user productivity solutions.

These new applications include “big data” analytics, sensor and signals processing, machine learning, genomics, social media trending and behavior modeling. Many of these have evolved around capabilities originally developed in supercomputing environments, but are now being exploited in more mainstream commercial solutions. 

We have all heard about big data analytics and the commoditization of scale-out map-reduce style computing for data that can be processed in “embarrassingly parallel” ways, but there are now also extreme applications emerging that require high throughput shared data access. Examples of these include some especially interesting business opportunities in areas like image processing, video transcoding and financial risk analysis.

Finding Nemo on a Big Planet

A good extreme application example would be image pattern recognition at scale. Imagine the business opportunity in knowing where customers were located, what kind of buildings they lived in, how they related geographically to each other and/or how much energy they use. Some of the more prominent examples of image-based geographic applications we have heard about include prioritizing the marketing of green energy solutions, improving development and traffic planning, route optimization and retail/wholesale targeting.

For example, starting with detailed “overhead” imagery (of the kind you find on Google Maps' satellite view), it is now commercially possible to analyze that imagery computationally to identify buildings and estimate their shape, siting (facing), parking provisions, landscaping, envelope, roof construction and pitch, and construction details. That intelligence can be combined with publicly available data from utilities, records of assessments, occupancy, building permits and taxes, and then again with phone numbers, IP, mail and email addresses (and fanning out to any data those link to) in order to feed a “big data” analysis. At scale this entails processing hundreds of millions of imagery and data objects over multiple stages of high performance workflow.
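
Sketched in Python, such a workflow might look like the following; every function here is a placeholder for a real component (imagery fetch, vision model, records join), included only to show the staged, fan-out shape of the processing:

```python
# Sketch of the multi-stage workflow described above; the stages are
# placeholders, not a real vision pipeline.
from concurrent.futures import ThreadPoolExecutor

def detect_buildings(tile_id):
    """Stage 1 (placeholder): pattern recognition over one imagery tile."""
    return {"tile": tile_id, "buildings": []}      # a real vision model goes here

def enrich(features):
    """Stage 2 (placeholder): join detections with permits, tax, utility records."""
    features["records"] = []                       # a real records lookup goes here
    return features

tile_ids = range(1000)                             # stands in for millions of tiles
with ThreadPoolExecutor(max_workers=32) as pool:
    for features in pool.map(detect_buildings, tile_ids):
        enrich(features)                           # feeds the downstream "big data" analysis
```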

A World of Devices Hungry for Content 

As another example, the demand and use cases for rapid transcoding of video are growing every day thanks to the exploding creation and consumption of media on mobile devices. In today’s world of Internet-connected devices, each piece of video that is created gets converted via “transcoding” into potentially 20 or more different formats for consumption.

Transcoding starts with the highest resolution files and is usually done in parallel on a distributed set of servers. Performance is often paramount, as many video applications are related to sports or news and have a very short time window of value. Competitive commercial transcoding solutions require fast storage solutions optimized for both rapid reads and massive writes.
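
A hedged sketch of that fan-out, assuming the ffmpeg command-line tool is installed; the rendition list is illustrative, not any vendor's actual encoding ladder:

```python
# Sketch: one mezzanine file, many renditions transcoded in parallel.
# Assumes the ffmpeg CLI is available; renditions are illustrative only.
import subprocess
from concurrent.futures import ThreadPoolExecutor

SOURCE = "master_1080p.mov"                     # highest-resolution source
RENDITIONS = [("1280x720", "3000k"), ("640x360", "800k"), ("320x180", "300k")]

def transcode(spec):
    size, bitrate = spec
    out = f"out_{size}_{bitrate}.mp4"
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-s", size,
                    "-b:v", bitrate, out], check=True)
    return out

# Each ffmpeg process reads the same source file: fast shared reads plus
# massive parallel writes -- exactly the storage profile described above.
with ThreadPoolExecutor(max_workers=len(RENDITIONS)) as pool:
    print(list(pool.map(transcode, RENDITIONS)))
```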

EMC ViPR Adds More Bite to Software-Defined Storage

At EMC World in Las Vegas this week, EMC threw its weight behind the much-hyped software-defined storage movement.

"Our customers today have data centers which are increasingly going software defined,” said Amitabh Srivastava, president of EMC’s newly created Advanced Software Division.

He said the newly announced ViPR provides the ability to manage storage infrastructure and the data residing within it. The ViPR controller uses the underlying storage infrastructure for traditional workloads, but can also provision ViPR Object Data Services by accessing them via Amazon S3 or Hadoop Distributed File System (HDFS) APIs.
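
Because the object services speak standard APIs, a stock S3 client pointed at a ViPR endpoint should work. A sketch using Python's boto3 library, where the endpoint, credentials and bucket are hypothetical placeholders:

```python
# Illustrative only: ViPR Object Data Services expose an S3-compatible API,
# so a standard S3 client aimed at the service endpoint should work.
# Endpoint, credentials and bucket below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://vipr.example.com:9021",  # hypothetical ViPR endpoint
    aws_access_key_id="TENANT_KEY",
    aws_secret_access_key="TENANT_SECRET",
)
s3.create_bucket(Bucket="media-archive")
with open("0001.jpg", "rb") as f:
    s3.put_object(Bucket="media-archive", Key="photos/0001.jpg", Body=f)
print([o["Key"] for o in
       s3.list_objects_v2(Bucket="media-archive")["Contents"]])
```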

Srivastava stressed that ViPR Object Data Services integrate with OpenStack and can be run against enterprise or commodity storage. And courtesy of the strong relationship with VMware, ViPR integrates tightly with VMware’s Software Defined Data Center.

“ViPR is lightweight and runs in a virtual application (vApp),” said Srivastava. “This is a strategic product for EMC and our Advanced Software Division has been mainly built around it.”

This cloud application can abstract storage into one pool for a centralized point of management and control. The user can create virtual storage arrays and implement policies to automate storage.

For example, ViPR instances can run on 200 EMC Symmetrix DMXs, 50 VMax and 50 Isilon units, providing a mix of block and file storage. Srivastava said any file-based storage under ViPR is given object capability.

“The emerging storage environment is object based,” said Srivastava. “In the past, there was no way to convert file-based systems to object storage.”

Those with most interest in this capability are likely to be media companies with multiple providers giving them huge amounts of images and video. With ViPR in operation, the data doesn’t need to be moved from one system to the other as it can be centralized and managed.

“Object storage is great for millions of photos or files, and it is the preferred way for storing them,” said Srivastava.

This is only the first of the ViPR services and more are coming soon. Analytics (i.e., being able to analyze data without having to move it into an appliance) is on the immediate horizon.  

StorageQuest Launches All-Flash Archiving Appliance

While the data storage industry rushes to incorporate solid-state drive (SSD) technologies into their appliances and arrays, Ottawa-based StorageQuest is taking a slightly different tack. The company announced that its Flash Storage Appliance (FSA), a headless, iSCSI-compatible unit that uses a type of flash memory typically associated with digital cameras, is publicly available.

StorageQuest's director of product development, Lelieveld-Amiro, said in a statement that what sets FSA apart is its use of CompactFlash cards.

"This new product leverages the popularity, availability and price of industry standard Compact Flash media and transforms it into a portable archival and retrieval storage system. Its current application lends itself well to the Security and Intelligence communities looking for portable, automated long term archiving of evidence data," informed Lelieveld-Amiro.

When it comes to backup and archiving, traditional hard drives and tape still dominate. Yet flash storage has made some inroads in systems like Nimble Storage's CS-series of converged storage and backup arrays. StorageQuest gives the concept a different spin.

StorageQuest's "first and only device of its kind" features 16 CompactFlash slots and supports cards from 1 GB to 256 GB, for a total of up to 4 TB of total flash storage capacity. FSA supports Windows 7 Professional, Windows 2003 or Windows 2007 servers.

Sporting a compact, desktop or rackmount form factor, FSA is fronted by an LCD screen and keypad for basic alerting, configuration and management. Redundant power supplies are included with the rackmount version.

FSA comes bundled with StorageQuest Archive Manager (SAM) software, which provides more sophisticated storage management and archiving capabilities.

SAM provides remote replication to the cloud, optical libraries or other CompactFlash systems; read and write caching; and card tracking and cataloging. Administrators can use SAM to group multiple CompactFlash cards, expanding the amount of data capacity that is available to users and applications. The software presents flash storage as virtual drive letters and supports manual, drag-and-drop storage and archiving.

For CEO Marwan Zayed, FSA fits into the company's approach to safeguarding data. "Our mission is to provide our clients a variety of scalable hardware and software storage technology options, that fit their particular needs, for long term archiving," he said in press remarks.

StorageQuest Flash Storage Appliance is available now. Prices start at $7,995.

Say What? Top Five IT Quotes of the Week

"We want to obliterate passwords within a few years"

PayPal Chief Information Security Officer Michael Barrett (eSecurity Planet)

"Openness always wins."

Facebook's Frank Frankovsky launching the new Open Compute Networking project (Enterprise Networking Planet)

"When we asked about SDN, thirty-four percent said they were more likely to see Elvis, Bigfoot, or the Loch Ness Monster than an actual SDN deployment"

Inbar Lasser-Raab, senior director of marketing for enterprise networking at Cisco, (Enterprise Networking Planet).

"I'm not going to be able to add value by designing a new garment, but what I can do is make sure my private cloud can meet the demands of the business today"

Mike Leeper, Director of Global Technology at Columbia Sportswear (Server Watch)

"Scale changes everything"

Rajeev Nagar, group program manager for Windows core networking at Microsoft (Enterprise Networking Planet)

Interop Panel Tackles SDN

LAS VEGAS - A Broadcom chip guy, a Microsoft software guy, and a VMware virtualization guy walk onto a stage. What do you get?

That's what the capacity crowd at Interop found out in a keynote session on Wednesday. Martin Casado, chief architect for networking at VMware; Rajiv Ramaswami, executive vice president and general manager of the infrastructure and networking group at Broadcom; and Rajeev Nagar, group program manager for Windows core networking at Microsoft, took the stage to discuss and debate SDN's present and future.

One of the questions that the panel addressed was where SDN fits in and where it's needed. The panel agreed that it makes sense for large deployments.

"Scale changes everything," Microsoft's Nagar said. "When you're managing exabytes of data and you're provisioning thousands of networks a day, you run into interesting challenges."

While some see SDN as a new evolution that threatens existing networks, VMware's Martin Casado disagrees. Casado is a key figure in the SDN movement. His 2005 Stanford thesis led to his original build of OpenFlow and the Nox SDN controller. Casado went on to co-found Nicira, which was acquired by VMware for $1.2 billion in 2012. Enterprise Networking Planet interviewed Casado in April.

"There is a lot of talk about the threat of SDN, but there has been so much change over the last decade already. and we have some capabilities already," Casado said during the panel. "Its living among us already."

Casado added that SDN doesn't make electrons go faster. Rather, he stressed that the value proposition of doing networking in software derives from speed and agility.

Network-aware applications

Broadcom's Ramaswami takes a more network-centric viewpoint.

"SDN first is about exposing what you have in the network, then it's about what you can run on top," he commented.

That comment led to a discussion on how much awareness applications actually need of the network. 

Casado's viewpoint? The app doesn't necessarily need to know everything about the underlying network.

"I do think some interaction between apps and network is good, but the less the app has to know about the network, the better it is for everyone," he said.

Microsoft's Nagar disagreed somewhat, noting that Unified Communications applications, such as Microsoft Lync, really can benefit from network awareness.

Broadcom's Ramaswami also sees a need for network aware applications. If something goes wrong in terms of application delivery or performance, network awareness is key, he asserted.

"You need the visibility, as the network will always have an impact on performance," Ramaswami said. "So you have to have some awareness. You don't need to know how to provision every port, but you do need info."

SDN's impact on network admins

SDN might also affect the role of network architects. In Casado's view, the role of a network architect varies from organization to organization, without any particular pattern, when it comes to SDN, but Ramaswami and Nagar feel differently.

Ramaswami sees the role of network admins becoming blurred with server admins as the question of who controls what becomes less clear.

Nagar, meanwhile, sees an expansion of the role of the network architect as a result of SDN. "The sandbox within which the network admin plays is now bigger," he said.

What's Realer, SDN or the Loch Ness Monster?

LAS VEGAS - On a good day in this town, you're likely to see Elvis on many a street corner. Outside of Las Vegas, Elvis sightings aren't as likely, and neither are sightings of real Software Defined Networking (SDN) deployment, according to Cisco research.

Today at Interop, Cisco is unveiling a new global IT study that surveys the attitudes of 1,300 IT decision makers in 13 countries. The study provides some interesting insights. 

Attitudes toward SDN: Skepticism, interest 

While real-world deployment of SDN is the focus of many of the 300 exhibitors at Interop 2013, the IT pros Cisco surveyed demonstrated a healthy amount of skepticism. 

"When we asked about SDN, thirty-four percent said they were more likely to see Elvis, Bigfoot, or the Loch Ness Monster than an actual SDN deployment," Inbar Lasser-Raab, senior director of marketing for enterprise networking at Cisco, told Enterprise Networking Planet.


Lasser-Raab cautioned, however, that the Cisco study asked IT people about networking in general. She suspects that in user groups more specific to the data center, the results might have been different.

"In the data center, there is much more of a focus on SDN because of the virtualized nature of the infrastructure," Lasser-Raab said. "There is a lot of buzz and opportunity with SDN."

And despite skepticism toward current deployments, Cisco's respondents indicated clear interest in deploying SDN in the very near future. Seventy-one percent said that they are planning to deploy SDN this year, in fact.

So why are IT pros looking at SDN?

Well, it's not about Elvis's blue suede shoes. It's about business value. Thirty-three percent of respondents said that they were eyeing SDN for its automated provisioning capabilities. Thirty-three percent also said that cost savings are a key driver. Eighteen percent identified analytics for traffic engineering and sixteen percent named custom forwarding and applications as factors attracting them to SDN deployment.

"We are seeing a lot of interest in SDN in networking," Lasser-Raab said.

Internet of Things 

Cisco also asked about the Internet of Things, another key trend in the networking space. With the Internet of Things, everything lives on the network, enabling a truly connected world. 

According to the study, forty-two percent of respondents are as familiar with Einstein's Theory of Relativity as they are with the concept of the Internet of Things. Additionally, forty-eight percent see new business opportunities coming from the Internet of Things.

HP 2920 Ethernet Switches: Flexibility, Scalability, Value

As Ethernet switches rapidly approach commodity status, switch vendors strive for relevance in an ever-evolving landscape. Industry giant HP looks to maintain its market leadership with the introduction of the HP 2920 Switch Series.

The 2920 family combines flexibility with value while maintaining scalability for even the largest of enterprises. The HP 2920 Switch Series consists of four switches. The HP 2920-24G and 2920-24G-PoE+ (Power over Ethernet) Switches offer twenty-four 10/100/1000 ports; the HP 2920-48G and 2920-48G-PoE+ Switches provide forty-eight 10/100/1000 ports. Each switch has four dual-personality ports for 10/100/1000 or SFP connectivity. The 2920 switch series also supports up to four optional 10 Gigabit Ethernet (SFP+ and/or 10GBASE-T) ports, as well as a two-port stacking module.

OpenFlow and so much more


Hewlett-Packard provided me with sample 2920-48G-PoE and 2920-24G switches. I set the switches up in a stacked configuration to test configurability, scalability, VLAN, and other capabilities, as well as to explore the management options bundled with the switches.

One of the first things I noticed was the quality of the switches' construction. With metal chassis and quality components, the units are built like tanks. No wonder HP offers a lifetime warranty on the units, with advance replacement and next-business-day delivery: the vendor doesn’t expect these switches to fail due to any manufacturing or design shortcomings.

Quality construction and a robust warranty are crucial elements for any switch, but the feature set is what determines a switch's usability. HP loaded the 2920s up with a vast array of capabilities, matching what any other vendor could offer and then some. The laundry list of features includes the expected, such as Quality of Service (QoS), IPv6 support, SNMP management, Layer 2 switching, and 802.1X. But it's the unexpected that brings interest to the 2920 family: OpenFlow support, integrated out-of-band management, and LLDP-MED discovery.

Those features and more elevate the 2920 family's importance. HP has set out to create a switch that performs well in multiple use cases, including those where scalability is crucial, such as in the modern data center.

For example, the 2920 switches come integrated with LLDP-MED. This extension to the link layer discovery protocol is designed to detect media endpoints, critical for sites using VoIP and VoIP-enabled Ethernet phones, videoconferencing equipment, and other communications devices. LLDP-MED also brings support for Enhanced 911 services to VoIP phones and devices. With this, administrators can create location databases and incorporate device location discovery.

Other capabilities centered on supporting the enterprise include high-speed failover, high port densities, and QoS for rich media communications. Software Defined Networking (SDN) support is another important element for enterprise networks. By incorporating OpenFlow into the 2920 family, HP allows infrastructure architects to create software defined networks that separate data paths from control paths.

The switches also support ring and chain stacking methodologies, which improve failover support by allowing member switches in the stack to continue operating, even with multiple failures. 2920 switches that feature PoE+ as part of the model designator support multiple allocation methods, such as automatic, IEEE 802.3at dynamic, LLDP-MED fine grain, IEEE 802.3af device class, or user-specified, allowing for more efficient management and consequent energy savings.

The 2920s incorporate multiple Layer 2 capabilities, such as VLAN support and tagging, GARP VLAN registration, Jumbo Packets, IEEE 802.1v protocol-based VLANs, and full spanning tree support for VLAN spans (RPVST+).

HP 2920 security and performance

In addition to all that, the HP 2920 switches also come with strong security capabilities. Native support for everything from 802.1x and ACLs to SSL makes the units viable for a broad range of use cases. Network managers will appreciate the monitoring and management capabilities incorporated into the devices, such as full digital optical monitoring for SFP+ and 1000BASE-T transceivers.

On the performance front, the units leverage HP’s ProVision ASIC architecture, touted as a low-latency, high-speed routing platform. What’s more, adaptive power controls reduce power consumption during low utilization. Energy-efficient Ethernet (EEE) support further cuts power consumption across the board.

While HP’s 2920 series of Ethernet switches may prove a vast improvement over previous generation switches, industry giants like Cisco Systems still offer stiff competition. With that in mind, HP commissioned a report from independent testing organization Tolly. The report offered several insights into the 2920s' superiority:

  • The HP 2920 switches provide up to fifty-eight percent faster throughput than the competitive Cisco Catalyst 2960-S in a two-member stack. 
  • HP 2920 switches offer up to 7.5x faster stack failover than the Cisco 2960-S and 140x faster stack failover than the Cisco 3750-X. 
  • HP 2920 switch buffers support up to 12.5x and 11.5x more frames in a microburst than the Cisco 2960-S and Cisco 3750-X switches, respectively. 
  • The 2920s reduce TCO by twenty-nine percent over Cisco 2960-S switches. 
  • HP 2920s deliver twenty-nine percent lower average latency than the 2960-S and forty percent lower average latency than the Cisco 3750-X.
The HP 2920 Switch Series starts at $1,425.

IDC Puts a Number on the Networking Market

LAS VEGAS - At its annual Interop breakfast meeting today, analyst firm IDC revealed its forecast for the SDN market and enterprise networking in general.


Rohit Mehra, vice president of network infrastructure at IDC, said that his firm predicts the total enterprise networking market for 2013 to come in at $42.4 billion. The Layer 2-3 switch market represents the lion's share of that figure at 46.9 percent. The market for Layer 4-7 devices, including application delivery controllers, is growing, though Mehra is more impressed by the pace of WLAN growth. He noted that WLAN revenues grew by 28 percent in each of the last three years.


Moving forward, Mehra said that IDC predicts the enterprise networking market to exceed $50 billion, with growth across all categories, by 2017.

One of the most exciting categories to emerge in recent years is the Software Defined Networking (SDN) space. IDC forecasts that by 2016, the in-use SDN marketplace will generate $3.7 billion in revenue. That's the same figure that IDC forecast for the SDN market in December 2012. At IDC's Interop 2012 breakfast meeting earlier that year, the firm pegged the 2016 market at $2 billion.

Mehra then detailed how some of the individual components of the SDN marketplace will stack up. IDC expects that SDN-related network infrastructure will represent 58 percent of revenues. The control layer piece will bring in 8.7 percent, while networking services and applications will account for 18 percent. The remainder of revenues will most likely come from professional services.

That said, Mehra admitted that the forecast is not complete, as IDC has not captured the impact on the silicon space. He also noted that overall, sizing the SDN market presents a challenge to analysts.

"We have been counting boxes for the last few decades," Mehra said. "We as an industry are at the instantiation of a new era, where revenues come from software."

Defining SDN

IDC analyst Brad Casemore explained that increased network scale and mobility drive the need for SDN, which can potentially enable a faster, simpler way to manage networks in an automated manner.

At its core, Casemore noted, SDN is about decoupling the control plane from packet forwarding in the data plane. It's also about providing management abstraction, visibility, and programmable interfaces. Finally, from a business benefit perspective, SDN is about the network providing more responsiveness to the applications.
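
That decoupling is easy to caricature in code. Here is a conceptual toy (not a real OpenFlow implementation): the controller holds the global view and pushes rules; the switch only matches its table and punts misses back upstream:

```python
# Toy model of control/data plane decoupling: a controller (control plane)
# computes forwarding rules; the switch (data plane) only matches and forwards.

class Switch:
    def __init__(self):
        self.flow_table = {}                 # dst -> out_port, pushed by controller

    def forward(self, dst):
        # A table miss is handed back to the controller, not decided locally.
        return self.flow_table.get(dst, "send_to_controller")

class Controller:
    def __init__(self, topology):
        self.topology = topology             # global view: dst -> port

    def install_flows(self, switch):
        # Centralized policy decision, programmed down into the data plane.
        switch.flow_table.update(self.topology)

sw = Switch()
Controller({"10.0.0.2": 1, "10.0.0.3": 2}).install_flows(sw)
print(sw.forward("10.0.0.2"))   # -> 1
print(sw.forward("10.0.0.9"))   # -> send_to_controller (miss handled centrally)
```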

For IDC, therefore, SDN is not a product or an endpoint. It's an architectural approach.

SDN and commoditization 

While some have suggested that SDN will lead to network commoditization, IDC disagrees. Casemore noted that the supply chain for SDN is still being built out, and that takes time to gestate.

This week, the Facebook-led Open Compute Project announced a networking switch effort that could enable a new era of white-box SDN switches. Casemore cautioned that it's still very early in the game.

Interop Video Exclusive: Don't Bet Against Ethernet

LAS VEGAS - In a town built on gambling, your safest bet this week is likely on Ethernet.


After 40 years, Ethernet has come to dominate network connectivity. This is in part thanks to John D'Ambrosia, a key figure in today's Ethernet world. Currently the chairman of the Ethernet Alliance, D'Ambrosia has done much to advance IEEE standards, in particular the 40 and 100 Gigabit Ethernet specifications. D'Ambrosia is also set to be confirmed as chair of the new IEEE group that will define 400 Gigabit Ethernet.

In an exclusive video interview at the Interop conference this week, D'Ambrosia showed off what makes Ethernet work and why it's not just a best-efforts technology anymore.

The IEEE defines standards. The Ethernet Alliance helps make the standards work.


"The mission of the Ethernet Alliance is to speed up the adoption of Ethernet technologies," D'Ambrosia said. "Speaking as someone that works in the IEEE, reality is that when I'm done, there is a 300-500 page document, and that's not the same thing as technology."

D'Ambrosia said it's very hard to show someone a standard and say it works, but that's precisely what the Ethernet Alliance does, as demonstrated at Interop.

That demonstration shows how copper and fiber interconnect solutions can all be used for Ethernet transport at 10 GbE and 40 GbE speeds.

"That's the exciting thing about Ethernet. We've got all of these different solutions available," D'Ambrosia said.

When the Ethernet Alliance started back in 2006, InfiniBand connectivity was in aggressive pursuit of the market. Standards introduced in 2006 and 2007 helped to narrow the latency gap between the two technologies.

"People have their religions, and I'm not going to slam others, I'm here to promote Ethernet," D'Ambrosia said. "What I'll always say is, don't bet against Ethernet."

Ethernet dominates because of its flexibility and adaptability. It's proven itself able to meet changing needs.

"If Ethernet doesn't solve a problem today and there is a market for it, we will solve it tomorrow," D'Ambrosia said.

Enterprise Networking Week in Review: SDN, Interop, BYOD, and Ethernet

It's been a big week for networking and a big week for Enterprise Networking Planet.

This week, the enterprise IT world descended on Las Vegas for the Interop IT conference and expo, which featured keynote speakers from tech heavyweights like Cisco, VMware, Facebook, and Microsoft; exhibitions from Brocade, Dell, Huawei, and Juniper, among hundreds of others; and enough news, analysis, discussion, and debate to keep an industry watcher busy until Interop 2014. It was a lot to take in, but our Sean Michael Kerner was on the ground and keeping pace every step of the way.

As Sean predicted last week, SDN dominated Interop 2013. Research from Cisco indicates significant interest in SDN. Analyst firm IDC's financial forecast for the near-future SDN market supports that conclusion. VMware's Martin Casado, Broadcom's Rajiv Ramaswami, and Microsoft's Rajeev Nagar got together for a roundtable keynote on SDN contexts, network awareness, and the future of network admins. 

But all that was just the macro discussion. Interop also provided a rich mine of vendor announcements. Juniper revealed its first-generation SDN controller, which will help bring enterprise networks "into the cloud age," according to Juniper marketing and business strategy VP Brad Brooks. Huawei, seemingly undaunted by U.S. government concerns, showed off a massive new switch and talked future plans for SDN and the U.S. enterprise market. And Frank Frankovsky of Facebook's Open Compute Project opened up about open source and the social networking giant's plans to get into enterprise networking hardware. Meanwhile, Broadcom unveiled a new generation of silicon for faster, more secure WiFi. Alcatel-Lucent debuted new Application-Fluent OmniSwitches.

Want video? We've got that, too. Mike Rydalch, principal technologist at WiFi vendor Xirrus, took Sean behind the scenes of the conference's BYOD WiFi network, while the Ethernet Alliance's John D'Ambrosia demonstrated the power of copper and fiber.

Speaking of Ethernet, this week we ran a review of HP's 2920 family of Ethernet switches. Our reviewer's verdict? Frank Ohlhorst approves.

These are the technologies that will enable the network of the future. That network, for many enterprises, will be BYOD. But is BYOD truly the best way to go? Early in the week, I spoke with cyber security expert and TopPatch CEO Chiranjeev Bordoloi about the dangers of Android malware on BYOD networks. Check out our interview for Chiranjeev's security recommendations.

Hello, by the way. We may not have met yet. I'm ENP's new editor, and while I haven't shown my face around these parts much yet, I plan to much more often in the future. Stick with me for the latest and greatest in networking news. Not every week can be Interop, but every week at ENP will be full of what matters most in the enterprise networking space. And, speaking of Interop, if you're still hungry for more from Vegas, stay tuned.

Have a great weekend. See you here next week!

Jude

Open Season on Proprietary Networks

Open source is everywhere these days, even in networking. In fact, reaping the full benefits of virtualized, or software-defined, architectures may require networking to embrace open source technology to an even greater degree than server or storage infrastructure. The distributed networks of the future will require extensive interoperability to maintain end-to-end connectivity, after all. In pursuit of that interoperability, many vendors are looking to open frameworks.

As I mentioned a few weeks ago, even Cisco Systems, which has quite a legacy hardware portfolio to protect, acknowledges the benefits of open formats like OpenFlow and OpenStack. Cisco is taking a relatively cautious approach to open networking, though. The vendor hopes to give its own platforms a competitive edge by offering broad integration with the open network community while simultaneously providing specialty ASICs that give value-added functionality to Cisco software running on Cisco hardware. 

Others—primarily those not beholden to any vendor's network hardware to begin with—are pursuing a purer open strategy. In this arena, one particularly surprising player has emerged: Facebook. With its Open Compute Project, Facebook hopes to move beyond social networking and into networking itself by leveraging its customized server, storage, and power platforms as reference designs for the data center industry. The project's latest move seeks to stake a large claim to the enterprise infrastructure market. At Interop earlier this week, Open Compute's Frank Frankovsky announced plans to develop an open hardware device designed to accommodate various operating systems, in much the same way that personal computer hardware can accommodate various OS installs.

At the same time, other networking firms are embracing open source to enhance their products' ability to reach across disparate architectures. Extreme Networks, for example, developed its Open Fabric Edge architecture with an eye toward helping campus networks accommodate new mobile and virtual platforms. The system provides a unified view of WLAN, UC, audio-video bridging (AVB), and physical security infrastructures. It uses programmable APIs and both OpenStack and OpenFlow for network customization.
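
To make the flow-based model behind OpenFlow a bit more concrete, here's a minimal Python sketch of the match/action idea at its core. The FlowRule and FlowTable names are hypothetical, invented purely for illustration; a real deployment would program switches through an actual OpenFlow controller, not anything like this.

    # Minimal sketch of OpenFlow's match/action model. FlowRule and
    # FlowTable are hypothetical names for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class FlowRule:
        match: dict        # header fields to match, e.g. {"dst_ip": "10.0.0.5"}
        actions: list      # actions to apply, e.g. ["output:2"]
        priority: int = 0

    @dataclass
    class FlowTable:
        rules: list = field(default_factory=list)

        def add_rule(self, rule):
            # Higher-priority rules are consulted first, as in OpenFlow.
            self.rules.append(rule)
            self.rules.sort(key=lambda r: r.priority, reverse=True)

        def lookup(self, packet_headers):
            # Return the actions of the first rule whose match fields
            # all agree with the packet's headers.
            for rule in self.rules:
                if all(packet_headers.get(k) == v for k, v in rule.match.items()):
                    return rule.actions
            return ["drop"]  # simplified table-miss behavior

    table = FlowTable()
    table.add_rule(FlowRule({"dst_ip": "10.0.0.5"}, ["output:2"], priority=10))
    print(table.lookup({"src_ip": "10.0.0.9", "dst_ip": "10.0.0.5"}))  # ['output:2']
    print(table.lookup({"dst_ip": "10.0.0.7"}))                        # ['drop']

The point of the model is that forwarding behavior lives in rules a controller can install, reprioritize, or remove at runtime, rather than in fixed device logic.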


Open platforms are also making their mark in wide area infrastructure. At Interop earlier this week, Avaya showed off a Shortest Path Bridging (SPB) system, which aims to improve performance and service delivery in disparate networks by placing provisioning functionality at the network edge. The system uses the Intermediate System to Intermediate System (IS-IS) protocol to build multipath fabric architectures between network nodes, cutting the costs and complexity usually associated with inter-node network topologies. The multi-vendor demo was staged primarily to demonstrate SPB's interoperability. Firms like Alcatel-Lucent, HP, and Spirent contributed various technologies and service platforms.
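
For the curious, here's how the underlying machinery works: a link-state protocol like IS-IS floods topology information so that every node holds the same link-state database and can compute paths locally. The toy Python sketch below runs a single Dijkstra pass over a made-up database to show the idea. Real SPB computes multiple equal-cost trees and manages far more state, so treat this strictly as a conceptual illustration.

    import heapq

    # Toy link-state database: node -> {neighbor: link cost}. In a real
    # IS-IS network, every node floods its adjacencies so all nodes
    # converge on the same database.
    lsdb = {
        "A": {"B": 1, "C": 4},
        "B": {"A": 1, "C": 2, "D": 5},
        "C": {"A": 4, "B": 2, "D": 1},
        "D": {"B": 5, "C": 1},
    }

    def shortest_path_tree(source):
        # Standard Dijkstra over the link-state database: the kind of
        # computation each node runs locally to derive its forwarding tree.
        dist = {source: 0}
        prev = {}
        heap = [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue  # stale heap entry
            for neighbor, cost in lsdb[node].items():
                nd = d + cost
                if nd < dist.get(neighbor, float("inf")):
                    dist[neighbor] = nd
                    prev[neighbor] = node
                    heapq.heappush(heap, (nd, neighbor))
        return dist, prev

    dist, prev = shortest_path_tree("A")
    print(dist)  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}

Because every node computes from the same database, consistent multipath forwarding falls out of the shared topology view without any central path provisioning.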

Without question, open networking is more easily accomplished in software than in hardware, especially if you hope to integrate it into legacy infrastructure. That alone will make open source important to future networking development. Keep in mind, however, that open systems do not inherently improve network performance or simplify operations. What open systems can do is help streamline network infrastructure and improve resource utilization to enable the kind of flexibility that the increasingly dynamic data universe demands.

In that vein, network managers should apply the same purchasing criteria to open source platforms as they do to standard equipment. Don't rush to deploy open architectures out of fear of arriving late to the party. The ultimate goal is not simply to build openness into enterprise infrastructure. It's to create cost-effective solutions that enhance end-user productivity.

How to Choose the Best Software for Your Enterprise Needs

The movement in networking towards virtualization, cloud platforms, and SDN is shaking up the enterprise software market and will continue to do so for some time. Change can be good. But how can buyers get ahead in our brave new software-defined world? Last month, PricewaterhouseCoopers (PwC) released a new report, Experience Radar 2013: Lessons from the U.S. Software Industry, which details findings about the enterprise software market from the customers' point of view. Patrick Pugh, PwC U.S. Software and Internet Leader, and Shaivali Shah, PwC Customer Experience Specialist and Experience Radar Solution Leader, spoke to me about how enterprise software buyers can maximize the value of their purchases and avoid common pitfalls.

How to find the best software values for your company

Size isn't everything. Neither the size of the price tag nor the size of the features list a vendor proffers should be your primary reason to buy, according to Pugh and Shah. Instead, they recommend taking a solutions-oriented approach. What problems do you need your new software to solve? What are the specific business goals you expect this software to help you achieve? Look for "vendors who think beyond features and can solve larger business problems," Pugh told me.

Vendors, he said, often attempt to sell to buyers based on "features that are their strengths but are not necessarily relevant to buyers' critical needs." Are the benefits a vendor offers, no matter how extensive, benefits that your enterprise requires? Should they take precedence over other unaddressed needs? "Finding a software vendor who gets your end goal, pinpoints which features to focus on, and can customize future solutions will bring you more value," Pugh said. Ideally, a vendor can "serve as a strategic partner to solve larger business issues."

Need another reason the length of a feature set shouldn't dictate your software buy? The number of features offered may not correlate with actual software performance, and reliable performance is critical to enterprise operations. Unplanned outages and technical difficulties mean lost time, lost productivity, and, ultimately, lost earnings. "It's important," Shah said, "to find vendors who can consistently deliver reliable performance, even if that means trading off some other features."

Buying for your company's needs 

The size and growth stage of your company should also guide your purchasing decisions. Is your company in growth mode? Every change your company goes through will change the demands placed on your software, too. For organizations in transition, Pugh recommends finding "software vendors who can work closely with them during software transitions. Find a vendor who can meet your need for scale and support you through other process, people, and IT changes."

Large, established multinational enterprises, on the other hand, have fewer transitional concerns but more operational needs and should, according to Shah, look for "software that's fully customizable to work within their complex, mature environments." These environments need to be up and running at all times. One of the key value drivers in the large enterprise environment, therefore, is the need for 24/7 reliability. "One third of large multinational enterprises value upgraded issue resolution services that would identify potential causes of issues and resolve them even before a problem occurs," Shah said, since "every minute of lost uptime translates into thousands of dollars."

Implementation is key 

So you've chosen a vendor ready to provide the most crucial features for your organization's needs, established a strong working partnership with that vendor, and done everything possible to ensure maximum scalability or reliability. Now what? No matter what software you've chosen, make sure both your company and the vendor are prepared to deliver an optimal implementation. "Implementation," Shah told me, is what the enterprise software buyers PwC studied remember the most. Not surprising, since "implementation sets the stage for future performance." Get a good implementation in place, and success will be more likely to follow.

BlackBerry Brings BBM To Android, iOS

WATERLOO, ONTARIO--(Marketwired - May 14, 2013) - BlackBerry® (NASDAQ:BBRY)(TSX:BB) today announced plans to make its ground-breaking mobile social network, BlackBerry® Messenger (BBM™), available to iOS® and Android™ users this summer, with support planned for iOS 6 and Android 4.0 (Ice Cream Sandwich) or higher, all subject to approval by the Apple App Store and Google Play.

BBM sets the standard for mobile instant messaging with a fast, reliable, engaging experience that includes delivered and read statuses, and personalized profiles and avatars. Upon release, BBM customers would be able to broaden their connections to include friends, family and colleagues on other mobile platforms. In the planned initial release, iOS and Android users would be able to experience the immediacy of BBM chats, including multi-person chats, as well as the ability to share photos and voice notes, and engage in BBM Groups, which allows BBM customers to create groups of up to 30 people.

"For BlackBerry, messaging and collaboration are inseparable from the mobile experience, and the time is definitely right for BBM to become a multi-platform mobile service. BBM has always been one of the most engaging services for BlackBerry customers, enabling them to easily connect while maintaining a valued level of personal privacy. We're excited to offer iOS and Android users the possibility to join the BBM community," said Andrew Bocking, Executive Vice President, Software Product Management and Ecosystem, at BlackBerry.

BBM is loved by customers for its "D" and "R" statuses, which show up in chats to let people know with certainty that their message has been delivered and read. It provides customers with a high level of control and privacy over who they add to their contact list and how they engage with them, as invites are two-way opt-in. iOS and Android users would be able to add their contacts through PIN, email, SMS or QR code scan, regardless of platform. Android users would also be able to connect using a compatible NFC-capable device. BBM has more than 60 million monthly active customers, with more than 51 million people using BBM an average of 90 minutes per day.

BBM customers collectively send and receive more than 10 billion messages each day, nearly twice as many messages per user per day as other mobile messaging apps. Almost half of BBM messages are read within 20 seconds of being received, indicating how truly engaged BBM customers are. Today, BlackBerry also announced BBM Channels, a new social engagement platform within BBM that will allow customers to connect with the businesses, brands, celebrities and groups they are passionate about.
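
As a quick back-of-the-envelope check on those figures: 10 billion messages a day spread across the 60 million monthly active customers cited above works out to roughly 167 messages per user per day. The snippet below simply reproduces that arithmetic; the comparison to other messaging apps is BlackBerry's claim, not something these numbers alone establish.

    # Back-of-the-envelope check of the per-user rate implied by the
    # figures BlackBerry quotes (10 billion messages/day, 60 million
    # monthly active customers). Purely illustrative arithmetic.
    daily_messages = 10_000_000_000
    monthly_active_users = 60_000_000

    per_user_per_day = daily_messages / monthly_active_users
    print(f"~{per_user_per_day:.0f} messages sent/received per user per day")
    # Prints: ~167 messages sent/received per user per day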


BlackBerry plans to add support for BBM Channels as well as voice and video chatting for iOS and Android later this year, subject to approval by the Apple App Store and Google Play. If approved by Apple and Google, the BBM app will be available as a free download in the Apple® App Store(SM) and Google Play store. Additional details about system requirements and availability will be announced closer to the launch.