Every week the Opus team picks a news story or topic or idea that is relevant to the entrepreneurs and businesses we partner with.


Look beyond “base-level” security to build a valuable container security company

Preeti Rathi

Much has been written about how containers are porous – that they are not as secure as virtual machines. In fact, a while ago, I penned a blog post about how security is the weakest area of containers. What a difference a few months have made – Docker, CoreOS and quite a few other groups have been heads-down on tightening container security and have announced numerous initiatives to make containers more secure.

Let’s begin with Docker. A few weeks ago, the company announced the acquisition of Unikernel Systems. The core idea behind unikernels is to strip the operating system down to the bare minimum an application requires, so that it runs only that specific application. Eliminating unnecessary code yields multiple benefits –

  1. A reduced attack surface for the application
  2. A smaller memory footprint
  3. Faster boot times

(Image: 2016-03-03_Container Image 1 – source: unikernels.org)

With the acquisition of Unikernel Systems by Docker, unikernels will gain widespread adoption, effectively becoming just another type of container. Thus, any application for which security and efficiency are crucial has a solution in unikernels.

Docker also recently announced support for seccomp (secure computing mode) and SELinux (Security-Enhanced Linux). Seccomp is a Linux kernel feature that controls an application’s behavior by limiting which system calls the application is permitted to make. Without seccomp, an application can invoke any system call supported by the kernel, and an attacker can try any parameters they want until they find an exploit. The challenge that remains, though, is for system administrators to figure out in advance which system calls each application needs to make.
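
To make the idea concrete, here is a minimal, illustrative sketch of seccomp's original "strict" mode, invoked directly through prctl(2) on Linux. This is not the filter-based JSON profile mechanism Docker actually ships (enabled via docker run --security-opt seccomp=<profile>), but it shows the underlying property: once enabled, the kernel refuses all but a handful of system calls. The constants below are taken from linux/prctl.h and linux/seccomp.h.

```python
# Illustrative sketch of seccomp strict mode (Linux only; Docker uses the
# far more granular filter mode with JSON profiles). After the prctl() call
# the calling thread may only issue read, write, _exit and sigreturn; any
# other system call is terminated by the kernel with SIGKILL.
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)

PR_SET_SECCOMP = 22       # from linux/prctl.h
SECCOMP_MODE_STRICT = 1   # from linux/seccomp.h

if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
    raise OSError(ctypes.get_errno(), "prctl(PR_SET_SECCOMP) failed")

os.write(1, b"write(2) is still permitted\n")

# open(), socket(), execve(), etc. would now kill the process. Note that in
# strict mode even the runtime's own housekeeping syscalls can trip the
# sandbox, which is exactly why Docker uses configurable filter profiles
# rather than this all-or-nothing mode.
libc.syscall(60, 0)  # raw exit(2); syscall number 60 assumes x86-64
```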

SELinux, on the other hand, is a kernel-level mechanism for enforcing mandatory access control policies, so that even a process running as root inside a container remains confined by the policy label applied to it.

A few months ago, Docker also announced Docker Content Trust (DCT). This initiative is focused on a different aspect of container security – validating the identity of a container’s publisher and the integrity of its content. DCT uses a two-key mechanism, ensuring that the publisher signs her work before making it available publicly and that users are able to verify this signature.
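
As a rough mental model (this is not Docker's actual Notary/TUF implementation, merely a sketch of the sign-at-publish, verify-at-pull idea), the flow looks something like the following, using an Ed25519 key pair from the cryptography package; the manifest contents are made up for illustration:

```python
# Conceptual sketch of publish-time signing and pull-time verification.
# Docker Content Trust actually uses Notary/TUF with multiple keys; this
# only illustrates the basic signature check a consumer performs.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- publisher side -------------------------------------------------------
publisher_key = Ed25519PrivateKey.generate()
image_manifest = b'{"name": "example/app", "tag": "1.0"}'   # hypothetical
signature = publisher_key.sign(image_manifest)       # sign before publishing
public_key = publisher_key.public_key()              # distributed to users

# --- user side ------------------------------------------------------------
try:
    public_key.verify(signature, image_manifest)      # raises if tampered
    print("manifest signature OK - safe to pull")
except InvalidSignature:
    print("manifest was tampered with or signed by someone else")
```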

CoreOS, which has always strongly emphasized the importance of security, has taken a different approach with its announcement of Distributed Trusted Computing (DTC). Using DTC, CoreOS says you can:

  • Validate and trust individual node and cluster integrity, even in compromised or hostile data center conditions
  • Verify system state before distributing app containers, data or secrets
  • Prevent attacks that involve modifying firmware, bootloader, the OS itself, or the deployment pipeline
  • Cryptographically verify, with an audit log, what containers have executed on the system

The boot process – from the very first step of powering on the system all the way to user login – is subjected to cryptographic validation, and each step can execute only if it validates against a public key held in firmware.
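
A toy sketch of that chain-of-trust idea follows; it is heavily simplified (real systems use TPM-backed measured boot and per-stage keys rather than one flat list and one key), and the stage names and blobs are placeholders:

```python
# Toy model of a verified boot chain: the firmware holds one public key and
# refuses to hand control to any stage whose signature does not check out.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()
firmware_public_key = vendor_key.public_key()   # baked into firmware

# Each boot stage ships alongside a vendor signature over its binary.
boot_stages = [
    ("bootloader", b"<bootloader binary>"),
    ("kernel",     b"<kernel binary>"),
    ("initrd",     b"<initrd image>"),
]
signed_stages = [(name, blob, vendor_key.sign(blob)) for name, blob in boot_stages]

for name, blob, signature in signed_stages:
    try:
        firmware_public_key.verify(signature, blob)
    except InvalidSignature:
        raise SystemExit(f"refusing to boot: {name} failed verification")
    print(f"{name}: verified, handing over control")
```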

Then there are companies like Intel that are focused on improving container security with the help of hardware features. The company announced a solution called “Clear Containers” that uses hardware hooks and software shortcuts to speed up containers while providing virtual-machine-grade isolation. Rather than staying “general purpose” and able to emulate any OS the way QEMU does, Intel chose kvmtool as its hypervisor and optimized that layer for a smaller footprint and faster boot times.

Finally, quite a few recently announced initiatives deliberately blur the line between a lightweight, fast container engine and a VM that offers strong isolation, aiming to offer the best of both.

Examples of these would be HyperD, Canonical’s LXD, the experimental Novm project, and Joyent’s Triton Elastic Container Host. The line will likely remain blurred, as companies experiment with combining the best of both worlds to provide products/projects that are focused on solving a range of requirements.

To give an example, here’s HyperD’s view of things –

(Image: 2016-03-03_Container Image 2 – source: hyperd.org)

Containers are currently in a state of evolution, and which of these initiatives/products will gain momentum remains to be seen. What does seem clear is that container security will be on par with VMs at some point in the future. Base-level container security is therefore table stakes, and startups focused on the security aspect of containers should identify areas beyond it in order to build a valuable company.



Security is the biggest unsolved problem for Containers

Preeti Rathi

Container security is not a well-solved problem today. While containers can isolate several areas of the underlying host from the containerized application, it is well known that the isolation is not as robust as that offered by virtual machines. Even today, it remains quite unclear how secure containers really are. An example that underlines this is a cheat sheet from Container Solutions on “using Docker safely”.

The security documentation from Docker states, “Running containers (and applications) with Docker implies running the Docker daemon. This daemon currently requires root privileges.” As one can imagine, root privilege is all-powerful – and if used improperly, it can wreak havoc.

One of the highly touted benefits of running containers is that they allow anywhere from two to six times as many instances on a given server as virtual machines do. This superior density, however, is possible only when containers are run on bare metal (versus running containers inside VMs). But if you check the container offerings from any of the public cloud providers today, you will find that none of them offers containers on bare metal. Even Google, which internally runs everything on containers and has been using the technology for a long time, does not offer containers on bare metal with Google Container Engine (GKE). This is because multi-tenant container security is still a problem waiting to be solved.

So far, a big part of the startup frenzy around containers has centered on container management (StackEngine, Tutum, Nirmata, Panamax), orchestration (Mesos, Kubernetes, Docker Swarm), continuous integration and testing (CloudBees, Shippable, CircleCI), and related deployment areas. The majority of exhibitors at DockerCon 2015 in San Francisco were focused on deployment-related solutions – further confirmation of this trend. Given that containers gained their fame and momentum as a technology for developers, deployment as the first area of focus makes sense.

However, the next stage of big growth for containers will come from enterprise adoption of the technology. For containers to become an enterprise-grade technology, evolution and innovation in some key areas is a necessity. The primary areas still in the early stages of evolution are container networking, storage handling, and security. Of the three, container security lags behind the most.

Different aspects of container security are in various stages of evolution – ranging from preliminary solutions announced for a few of them to no resolution in sight for others.

Let’s take image security as an example – which, at a high level, is about ensuring that an image has not been tampered with and assessing images for vulnerabilities. Some initial solutions have recently become available in the market. Docker recently announced Docker Content Trust (DCT), which makes it possible to verify the publisher of Docker images. Furthermore, IBM’s Vulnerability Advisor gives container developers a view into their images’ security properties and provides guidance on how images can be improved to meet common-sense best practices and pick up known industry fixes.
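
A vulnerability check of this kind is conceptually simple; the hard part is maintaining the vulnerability database and extracting package lists from image layers. A minimal sketch of the comparison step, with a hypothetical package manifest and a tiny hand-rolled advisory feed, might look like this:

```python
# Minimal sketch of image vulnerability assessment: compare the packages
# installed in an image against a feed of known-vulnerable versions.
# The manifest and feed below are illustrative; real scanners parse image
# layers and pull advisory data from security trackers.

image_packages = {           # package -> version found inside the image
    "openssl": "1.0.1e",
    "bash": "4.3",
    "curl": "7.50.3",
}

known_vulnerabilities = {    # package -> {vulnerable version: advisory}
    "openssl": {"1.0.1e": "CVE-2014-0160 (Heartbleed)"},
    "bash": {"4.3": "CVE-2014-6271 (Shellshock)"},
}

findings = [
    (pkg, version, known_vulnerabilities[pkg][version])
    for pkg, version in image_packages.items()
    if version in known_vulnerabilities.get(pkg, {})
]

for pkg, version, advisory in findings:
    print(f"{pkg} {version}: {advisory}")
```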

Similarly, quite a few companies have already announced support for another well-known area of container security – container monitoring, which is about providing visibility into running containers. These include New Relic (Servers for Docker), Datadog, Sysdig, GroundWork’s BoxSpy, Google’s cAdvisor, and others.
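
Much of that visibility ultimately comes from the per-container accounting the kernel already keeps in cgroups. A bare-bones sketch is below; it assumes a cgroup v1 hierarchy mounted at /sys/fs/cgroup with Docker placing containers under its default "docker" cgroup parent, which is an assumption about the host's layout rather than a universal path. Commercial tools add per-process, network and event-level visibility on top of this.

```python
# Bare-bones container memory monitoring straight from cgroup v1 files.
# Assumes /sys/fs/cgroup/memory/docker/<container-id>/ exists, i.e. a
# cgroup v1 host with Docker's default cgroup parent.
from pathlib import Path

DOCKER_MEMORY_CGROUPS = Path("/sys/fs/cgroup/memory/docker")

def container_memory_usage():
    """Yield (container_id, bytes_in_use) for every running container."""
    if not DOCKER_MEMORY_CGROUPS.is_dir():
        return  # no running containers, or a different cgroup layout
    for cgroup in DOCKER_MEMORY_CGROUPS.iterdir():
        usage_file = cgroup / "memory.usage_in_bytes"
        if usage_file.is_file():
            yield cgroup.name, int(usage_file.read_text().strip())

if __name__ == "__main__":
    for container_id, used in container_memory_usage():
        print(f"{container_id[:12]}  {used / 1024 / 1024:8.1f} MiB")
```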

However, other security areas, like preventing unauthorized access to containers, still need a robust solution in order to accelerate container deployment and usage in the enterprise. Startups like Scalock, along with a few others still in stealth mode, are focused on providing such a solution.

Another security area still in the early stages of evolution is running containerized apps with different security profiles on the same host – something that is considered too risky today. Solutions are needed not only for setting security policies for containerized apps but also for closing any loopholes in a container’s configuration profile. Twistlock, an Israel-based startup, provides the ability to configure security profiles for containers and also offers container monitoring capabilities.

However, for some areas, like multi-tenant security, there is no solution in sight.

It has been said often enough that containers do not contain. But this creates an opportunity for entrepreneurs to innovate and generate huge value in the process.



The Industrial Internet Consortium

Ajit Deshpande

The Internet of Things has been a hot discussion topic (as well as an emerging trend) over the past few years. As a term, it encompasses a broad range of sectors, including M2M, smart consumer end-points, wearable devices, sensor networks, machine learning, factory automation, and so on. As devices and data continue to proliferate, interoperability across IoT devices, platforms and networks is becoming more and more of a concern. Well, last week a step was taken towards alleviating this concern, with the announcement of the Industrial Internet Consortium (IIC), formed by Cisco, IBM, GE, Intel and AT&T with the objective of creating an interoperability standard around IoT.

There have been attempts at standardization in the past, most notably the AllSeen Alliance, launched by Qualcomm and a number of startups to push Qualcomm’s AllJoyn as a standard, with somewhat of a consumer end-point focus. In comparison, the IIC seems more focused on industrial use-cases. Together, these are two huge steps towards providing structure to the infrastructure layer for IoT. At the same time, the one player conspicuous by its absence in these standardization efforts is Google, with its Android OS. If, as expected, Android ends up becoming the OS around end-points and network controllers, then Google will likely hold all the aces around interoperability, potentially making the IIC, AllSeen and others somewhat redundant.

So, we will see how things shape up. Indeed, the lack of interoperability within IoT has in the past been a bit of a deterrent for early-stage venture investors (beyond the wearable device space, which has been able to raise good money on the back of successful crowd-funding campaigns), so hopefully the IIC, AllJoyn and other initiatives help free up startups in this space to focus on what might eventually matter the most – data consolidation, data processing, and data analytics.



Insights, from New Relic

Ajit Deshpande

Last week, application performance management (APM) startup New Relic announced the launch of New Relic Insights, a real-time analytics platform for web applications. Formerly code-named Rubicon, New Relic Insights consists of two parts: a set of analytics extensions/agents added to the enterprise application, and a cloud-based service that ingests the data from the applications and allows detailed querying, analysis and reporting. New Relic’s agents extract data from the enterprise app, and the service then stores and presents the information in real time via an SQL-like query language. The entire service is a custom, closed-source creation and is not based on any existing open-source solutions such as Hadoop.

New Relic Insights helps users go beyond crash reporting into more real-time data extraction and analytics, and is one piece of New Relic’s broader portfolio of planned products, which includes APM (web, mobile and browser-based) and server performance monitoring, among other things. In itself this looks like a pretty holistic approach geared towards monitoring performance across the hardware and software stack. At the same time, the interesting thing to note is that the use-cases (and correspondingly the buying decision-makers) across these products are quite different from each other, spanning IT operations, web development, mobile development and business analysis. In that sense, it will be interesting to see how successful New Relic is in making its broad vision a reality. Indeed, New Relic Insights could by itself already be a challenging proposition, in terms of purposing a single platform for analytics and business intelligence across a variety of applications.

So, where might this go? Might this be just a posturing tactic by New Relic to scare off early-stage entrepreneurs in this domain, or is New Relic laying out the vision for what might eventually become a strategy driven by inorganic growth? Could this be a blueprint that mobile APM leaders such as Opus portfolio company Crittercism should also evaluate? We will see, but for now it is at the very least interesting that, in a world that on the face of it seems to be going all-in on open source, New Relic is joining Splunk and a few other analytics players in building a closed-source, custom analytics solution.



Embedded images, from Getty

Ajit Deshpande

Last week, image licensing giant Getty Images announced that it was opening up more than 35 million of its images for embedding into online content platforms such as WordPress, Twitter and Tumblr. The embedded images, for their part, would credit Getty Images as the owner of the content and link back to the Getty content repository for purchases.

Getty and Corbis are the two dominant image repositories today. Both have similar business models: buy stock photograph rights from photographers, license these images out to online content creators for a fee, and enforce their ownership rights via legal channels as much as possible. Last week’s announcement has been touted by many as similar to the transformation that online music went through, wherein pirated music sharing was replaced by low-cost purchases on iTunes. In terms of legality, this seems to be a step in the right direction for Getty, because it can now utilize image embedding for two purposes: brand recognition and advertising. Earlier, the only input that Getty had in pricing its images was buyer demand in terms of the number of image purchases, but the company had no visibility into the number of eyeballs an image received, which in turn was driven by the caliber of the web content the image was used within. Now, with advertising and brand visibility coming from image embedding, Getty’s revenue can likely scale with the number of image impressions, which makes this new model far more effective for Getty.

But then, what about the photographer? Could he/she also be somehow enabled to make money in proportion to the number of image impressions? Could a pricing engine be built that gives the photographer a choice between selling image rights upfront (for the right amount), receiving image royalties based on impressions, or some hybrid of these two approaches? Could someone like Getty offer its photographers such a versatile and predictive pricing platform? Now *that* would be a true marketplace-strengthening innovation!



Atlassian builds its marketplace

Ajit Deshpande

For more than a decade now, Sydney, Australia-based Atlassian has been offering tools that are widely used by software developers and project managers. The company offers products such as the issue-tracking application Jira, the hosted code collaboration solution Bitbucket, and the Git repository management platform Stash, among others, which have together helped the company reach more than $150 million in annual revenues. Over the years, third-party developers have built add-ons to Atlassian’s on-premise offerings to further meet developer needs; these have been offered via the company’s online marketplace over the past year or so. Now, as development moves more and more into the cloud, Atlassian has continued to make its own transition, and last week the company announced it was opening up its marketplace to third-party add-ons for its cloud products as well.

Issue tracking and version control have seen a lot of advancement over the past decade, partly driven by Atlassian and competitor GitHub, each of which is currently valued at more than a billion dollars. As competition between the two continues to heat up, the key question will likely be which company gains greater mindshare and usage amongst developers. In this context, Atlassian’s recent launch of Git Essentials (a product suite that combines Jira, Stash and other tools into a single integrated offering for enterprises), as well as last week’s marketplace-related announcement, will help the company further engage its developer community. GitHub, on the other hand, has taken a slightly different approach; instead of building an affiliate marketplace program, it offers other avenues, such as job postings, to bring developers to its site. With both GitHub and Atlassian continuing to take such steps towards developing their ecosystems within this humongous software development market, the great news is that there will likely be three winners here: Atlassian, GitHub and, finally, the software developer!



Flickr Turns 10!

Ajit Deshpande

Photo-sharing site Flickr celebrated its 10th anniversary last week. Acquired by Yahoo in 2005 for $35 million, Flickr currently has 92 million users across 63 countries who between them share a million photos per day. More than 6 billion photos had been uploaded to Flickr by mid-2011, suggesting that approximately 7 billion photos have been uploaded so far.

The past decade has seen significant evolution in this space, from image repositories to private image-sharing groups to mobile social-sharing communities. Facebook, Instagram, Picasa, Flickr, Photobucket and Shutterfly, as well as many other upstarts, have all been jostling for consumer attention over this timeframe. A lot has been said about Yahoo’s role in Flickr’s evolution over this decade. On the whole, Flickr has done reasonably well, yet the product has clearly been negatively impacted by the social and mobile wave. It took just over a year (between 2010 and 2011) for a billion images to be uploaded to Flickr, whereas the current upload run rate of 1 million a day is far slower. Further, Flickr is already dwarfed by the far younger Instagram’s 150 million users and 16 billion uploads.

So, here we are today: mobile device adoption resulting in exponential growth in image data, storage constraints creating silos of content across a user’s multiple devices, and multiple social-sharing communities causing further fragmentation of consumer-owned content. Now that the key platforms have emerged, the need of the hour might be for an aggregator to bring all of these together: one that might be able to sync images and track friend-lists across a user’s various image platforms, one that could potentially consolidate storage capacities across these platforms, one that could help a user seamlessly share albums and images across various public or private lists. Efforts are underway on this front at Opus portfolio company Eye-Fi, at Lyve (a Seagate-funded startup set up by Apple alums) and at other firms. We have now moved on from physical prints to digital memories, so here’s hoping that we are able to manage these digital memories and revisit them over the coming decades.



Isis and the banks!

Ajit Deshpande

Contactless mobile payment technology has been around for almost a decade now, but has not yet become mainstream in the United States. Isis, the joint mobile wallet initiative undertaken by AT&T, Verizon and T-Mobile, has been touted as a key player in driving technology and consumer adoption in this space. Last week, the company announced that three large banks (Wells Fargo, Chase and American Express), as well as a number of key merchants, including Jamba Juice, Coca-Cola and Toys’R’Us, had partnered with it to offer cash and in-kind incentives of up to $300 over the next three months to each consumer who uses the Isis Mobile Wallet.

Contactless mobile payment makes logical sense for the long run, since it leverages the pervasiveness of mobile devices to enable online tracking and management of financial transactions. However, the technology has faced adoption challenges on two main fronts. First, deployment of Near Field Communication (NFC)-compatible point-of-sale (PoS) systems requires merchants to commit to capital expenditures and PoS integration. Second, getting consumers to adopt and start using mobile wallets is a significant challenge involving user experience, credentials management, and so on. With last week’s announcements, Isis is attempting to address both of these challenges to some extent, thus providing some much-needed momentum to contactless payments. Carriers have much to gain from this, since they control the mobile handset and would much rather have the mobile phone become the hub for all transactions. For merchants and banks too, the benefits are significant early on – high-end early-adopter consumers for the merchants, and ‘top of the wallet’ spots for participating bank credit cards.

Significant headwinds still remain in this space. Large corporate entities such as Google and Sprint, as well as startups including Opus portfolio company Sequent, continue to chip in to spur market adoption, yet NFC still faces challenges. Apple remains the most visible holdout on the device front, and NFC-compatible PoS system deployment continues to be slow. Can Isis and its banking and merchant partners create enough of a positive feedback loop with these near-term offers to successfully lure and retain consumers? Consequently, could 2014 eventually become the ‘year of NFC’? All that remains to be seen…



OpenStreetMap gets a fillip

Ajit Deshpande

Last week, mobile GPS-based mapping player Telenav announced that it had purchased Skobbler, a developer of mapping apps based on OpenStreetMap (OSM), for approximately $24 million. Based in Germany, Skobbler offers the top-rated OSM-based navigation app at this time. The move unites Skobbler’s strong team with OSM founder Steve Coast, who joined Telenav from Microsoft in 2013. OSM itself is a 10-year-old project to which more than 1.5 million individuals currently contribute, continually building and enhancing an editable global map. Recently, it has been adopted by apps such as Foursquare.

As of today, mapping data comes from four key sources: Google, TomTom, Navteq (part of Nokia), and OSM. Navigation apps use this data across three broad categories – web and mobile mapping, which sees players such as Google Maps, Waze, Telenav (whose Scout mobile app uses Navteq) and Apple Maps (based on TomTom); automotive mapping, which is dominated by Navteq, Garmin (based on Navteq) and TomTom; and the consumer GPS device segment, where Garmin dominates. Of the mapping data providers, clearly the only one with any chance of competing with Google is OSM, with its crowd-sourced, open-source approach. Now, with Steve Coast and the Skobbler team on board, Telenav becomes the first major GPS mapping player to formally ‘get behind the crowd’. This should likely cause some introspection at Google, and even more so at Apple and Nokia.

While Skobbler clearly wasn’t a big exit, it does further prove the validity of a crowd-sourcing-focused business model, following in the footsteps of Waze. As for Telenav, in theory it now has the resources to build an OSM-based competitor to Waze. Could Telenav leverage these resources to achieve a billion-dollar outcome for itself? That depends on execution (and on potential acquirer Apple as well).



Another billion dollar acquisition for VMware

Ajit Deshpande

Mobile device management (MDM) had its first billion-dollar exit last week, when one of the leading startups in the space, AirWatch, was acquired by VMware for more than $1.5 billion. An eleven-year-old, Atlanta-based company, AirWatch has more than 10,000 customers and between $125 million and $150 million in revenues.

From a technology-trends standpoint, there might be some headwinds around the MDM space. The first question is whether mobile can become a decent end-user compute platform, and the second is whether mobile data or application management, wherein the focus is more on managing and securing data and apps rather than managing the end-point device, is the better approach. Both will impact the long-term need and growth prospects for MDM. All things considered, however, AirWatch still looks like an excellent fit for VMware. VMware has recently been focused on mobile virtualization, which is likely complementary to AirWatch’s MDM suite. AirWatch also likely brings key new customer relationships to VMware, while at the same time helping VMware ward off competitor Citrix (which purchased smaller MDM player Zenprise in late 2012). As VMware goes up against Amazon, Microsoft and Google on the server virtualization front, this mobility play opens up another flank for the company. Most importantly, this acquisition strengthens the EMC family significantly. Pivotal already has strong capabilities around mobile data architectures (via the Pivotal Labs acquisition) and mobile app development (via the Xtreme Labs acquisition). It will be interesting to see if components of AirWatch’s technology make it to Pivotal to help Pivotal become a full-stack mobility services provider to the industrial internet.
