Credential Holder Directories for Google Cloud Certified Professionals

17 Jul

Google recently started the Credential Holder Directory initiative for Google Cloud Certified Professionals. Google Cloud certifications have been around for a while, but Google did not have a centralized listing of certified professionals in the past. With the Credential Holder Directory initiative, Google gives professionals an option to maintain their own directory entries with all relevant certifications.

At the time of this writing, there are about 2000 professionals listed in the Directory, covering the handful of certifications offered by Google Cloud Platform. That number is likely to grow quickly. One good feature of this directory is that each professional can maintain a short profile and links to Twitter and LinkedIn, along with all the Google Cloud certifications the person holds.

Brave Browser

01 Jul

I started using the Brave browser recently and am quite impressed with its security features. Nowadays it is my browser of choice for reading several news and entertainment sites. It limits the eye-catching distractions that are otherwise irksome.

I like the way Shields work, and I often control the security settings either at the global level or per page. Brave is now part of my standard install on desktop clients.

Linux Kernel Session at SRKR Engineering College

06 Mar

On Tuesday, 6th March, I delivered a session on Linux Kernel Architecture at the CSE department of SRKR Engineering College, Bhimavaram. The session's goal was to introduce the various subsystems of the Linux kernel and how they work together to deliver a flexible and robust operating system. Here are a few pictures from the event.

Slides used in the Linux kernel session are available on the classes page.

IPv6 on AWS

21 Feb

IPv6 is finally gaining some momentum, thanks to support from several public cloud vendors and data center players in recent months. Beyond the infrastructure players, the slow migration of several ISPs and corporations towards IPv6 is evident in Google’s IPv6 traffic statistics:

IPv6 adoption over the years (chart courtesy of Google)

Hovering around 13%, the adoption rate has been impressive in recent months, and support from players like AWS should improve this metric further over the next few months.

It is also interesting to see that India is reasonably ahead in this initiative, as seen in Google’s stats:

IPv6 adoption by country (chart courtesy of Google)

AWS announced support for IPv6 in EC2 around 1st December 2016, and by January this year the support was extended to 15 of its global regions. During those weeks, dual-stack support was also extended to ELBs (load balancing), Route 53 (DNS), public VIFs, S3, IoT and individual CloudFront distributions.
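
Enabling dual-stack on an existing VPC is just a couple of API calls. Here is a minimal sketch using the AWS CLI (the VPC/subnet IDs and the /64 block below are placeholders; the /64 must come from the /56 that AWS assigns to your VPC):

# ask AWS for an IPv6 block on the VPC, then carve a /64 for a subnet
aws ec2 associate-vpc-cidr-block --vpc-id vpc-0123abcd --amazon-provided-ipv6-cidr-block
aws ec2 associate-subnet-cidr-block --subnet-id subnet-0123abcd --ipv6-cidr-block 2600:1f16:abcd:1200::/64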

One key thing to note is that the assigned IPv6 addresses are internet-routable, so you need an Egress-only Internet Gateway if any of the interfaces don’t need to be exposed to the internet and have to remain in a private network.
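
Setting one up is straightforward. A minimal sketch with the AWS CLI (placeholder IDs again) creates the gateway and points the default IPv6 route at it, so instances can initiate outbound IPv6 connections without being reachable from the internet:

# create the egress-only gateway and route all outbound IPv6 through it
aws ec2 create-egress-only-internet-gateway --vpc-id vpc-0123abcd
aws ec2 create-route --route-table-id rtb-0123abcd --destination-ipv6-cidr-block ::/0 --egress-only-internet-gateway-id eigw-0123abcd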

Simple yet strong steps towards IPv6 adoption!

IT, Android One and BYOD

11 Jun

BYOD (Bring Your Own Device) is now a paradigm that is tightly integrated into the IT spectrum. IMO, Android One simplifies the life of IT staff handling user-owned devices that operate on data owned by organizations.

IT staff's ownership of client devices/endpoints has been shrinking fast in recent years. This is due to use cases focused on end users, service providers, partners and internal employees, all of whom continuously contribute to an organization's data. Despite this reduced ownership, IT staff continue to have a responsibility to prove that they have adequate controls over these devices and their data.

For example, signatures of customers and delivery details on delivery personnel's client devices must be handled with all the integrity and confidentiality controls expected of the IT staff of a shopping website and its delivery partners. There was a time when client devices were custom-made solutions for the delivery companies, but smartphones are rapidly replacing these legacy client devices. More often than not, these smartphones are owned and updated by individuals rather than organizations. Hence BYOD devices pose a challenge to IT staff and increase the threat to data confidentiality and integrity.

The major challenge for IT staff is to ensure that all the nomadic client devices are running an approved, stable and up-to-date stack. In the old days (say, about 10 years ago), the client devices were mostly laptops that needed to be patched and upgraded regularly, along with appropriate user access controls. With the proliferation of smartphones as client devices, the challenge has multiplied. Wearing an IT professional's hat, I see every smartphone like this:

Android One: Carrier-Vendor-Android Stack

The moment I think about manageability of that smartphone (not ownership, which is never going to happen), I see it as:

Android One: Carrier-Vendor-Android-IT-Stack

The IT stack in the above picture is a combination of various off-the-shelf and home-grown applications, together with well-tested configurations of those applications. More often than not, the IT stack applications and configurations depend heavily on the underlying Android stack. That means it pays to support these applications and configurations on a limited set of the latest Android versions.

When it comes to upgrades (read: patching) of the Android stack, both carriers and vendors have long release cycles for stack upgrades on target devices. As a result, most smartphones that are more than a year old end up running Android versions that are old and probably not patched fast enough. This is true of any mobile OS, though, not just Android.

Supporting the IT stack in the above picture is a nightmare for IT staff if they have to support it on multiple, older versions of mobile operating systems. Because of this, IT staff may want mobile phones to run the latest OS, but the long release cycles of phone vendors and carriers often become a hurdle.

Android One (https://www.android.com/one/) is the best way out of that version-control mess. I have been using a cost-effective and reasonably powered Android One phone since 2014. Over the last year and a half, this phone has become my device of choice for use cases that strictly require the latest versions of the Android platform and its ecosystem. The use cases include IT tools like VPN connectivity apps, single sign-on solutions, device control/erase solutions, messaging solutions and sharing solutions. This $100 unlocked dual-SIM phone is a very reasonable investment for adhering to stringent IT policies.

Android One is supported on phones ranging from the very high end (e.g. the Nexus series sold directly by Google) all the way to cost-effective phones in emerging economies. In almost all cases the phones come unlocked, leaving customers a wider choice of carriers.

Updates to my Android One smartphone have been regular and painless over the last year and a half. The ability to grab the latest Android update within a few hours makes Android One my preferred choice.

In any BYOD-centered IT infrastructure, Android One is the best way for IT staff to enforce tighter IT policies on smartphones while ensuring that user devices run the latest version of the mobile stack. That in turn ensures that the IT stack on the smartphone is current and easy to manage.

Identity as the Perimeter

03 Sep

The perimeter of an enterprise has been its LAN and WAN for quite a number of years. The popularity of VPN-based remote access did extend the definition of an enterprise's perimeter to the remote presence of its employees, albeit, more often than not, for short bursts of time.

As trends like cloud-based services and BYOD emerged, enterprises faced the daunting challenge of protecting their data. In the new age of networks, data gets hosted (e.g. on public cloud services) and accessed (e.g. on laptops and phones) on devices that are beyond the firewalls of an enterprise. Moreover, employees want more and more flexibility in accessing data: wherever they are, and on whatever they carry.

RSA‘s Jason wrote a blog post describing the (potentially outdated) strategy of one of the information security people he met: take away access to anything that has a hint of risk. Jason identifies the problem as well as the side effects of that approach.

Here are the key assumptions enterprises need to make regarding their data:

  • Data takes multiple forms: e.g. Email, documents, code, tools, configurations and employee personal data
  • Each form of data might need different levels of access in terms of confidentiality and integrity: e.g. read-only, read-write for owner, write-once, privileged read-only and limited access
  • Data gets hosted at multiple locations (often beyond the firewalls of the enterprise): e.g. E-mail service provider, private data centers, private clouds, shared public clouds
  • Data gets accessed from multiple locations (often beyond the firewalls of the enterprise): e.g. desktops, laptops, phones, and to take it a step forward, TVs and car infotainment systems capable of reading your email.

Centrify‘s Tom Kemp shares his thoughts on making identity the new perimeter. Treating identity as the new perimeter has the potential to solve many of the challenges arising from the assumptions listed above.

  • Identity controlled by an enterprise can be used to control access to data in all its forms.
  • Enterprises can use single sign-on (SSO) solutions that go beyond two-factor authentication to provide on-demand access to data, using identity as the primary factor.
  • SSO solutions make it easy for enterprises to control identity-driven access consistently across multiple service providers: public clouds, internal data centers and private clouds.
  • SSO solutions, combined with remote device access/control solutions, make it easy for enterprises to control the life of data persisted on nomadic devices like phones. This helps when a device is no longer tied to the same identity.

There is a lot of mindshare building around managing identity and making it the primary factor in access management. As Jason observes in his article, identity management should go well beyond two-factor authentication. Context should be combined with identity to make more meaningful decisions about granting access to privileged information. That requires wiring several identity management and analytics products together to dynamically determine access levels.

Google already does this for its own services. If you log in from an unusual location, device or application, it can enforce additional steps to verify your identity. I am really impressed (but not at all surprised) by Google's ability to take this beyond location and device, down to the application level. For example, Google maintains analytics about your favorite desktop browser for accessing Drive, and if you change it, Google notifies you about the change (and often counters with additional checks, depending on context).

I take Google's approach as an exemplary first step in augmenting identity with data about context. As identity management solutions evolve, enterprises can bank on independent, collaborating solutions that determine identity. The collaboration among these solutions would center on determining the user's context and deciding whether the identity can be established unambiguously within that context. As the definition of the perimeter evolves to center more on identity, these emerging trends in identity management are both welcome and necessary.

Driverless Cars: Moral and Legal Considerations

09 Aug

Driverless cars are no longer a fantasy. Though still far from general-purpose use, the technology is evolving by leaps and bounds, thanks to players like Google and Tesla making steady progress. As the technology evolves and enters public life, several legal and moral issues are going to crop up.

The recent issue of Communications of the ACM carries a nice article describing the moral challenges of driverless cars. In this thought-provoking article, the author presents scenarios that raise ethical and moral questions. To quote from the article:

However, should an unavoidable crash situation arise, a driverless car’s method of seeing and identifying potential objects or hazards is different and less precise than the human eye-brain connection, which likely will introduce moral dilemmas with respect to how an autonomous vehicle should react …

Driverless cars have the potential to fare better than humans 90% (or more) of the time. But the remaining small percentage of situations tends to bring up ethical and legal dilemmas where humans would fare vastly better than the technologies used in driverless cars. In these situations, human drivers are usually faced with multiple choices that vary in the amount of damage done to property or humans. The sensors and algorithms used in driverless cars (as they stand for the next few years) may have limitations in identifying the course that leads to the least impact or destruction. When the system operating a driverless car makes a non-optimal decision, there could be several legal and ethical ramifications.

As discussed in the above-mentioned article, handing over control to a human driver in emergency situations is far from reality, given the response times needed by a disengaged human. Even the automation around a fully engaged driver's actions is being subjected to several legal questions around responsibility. For example, this article in the WSJ discusses how Tesla's autonomous car-passing feature intends to pass responsibility to the driver by making it a driver-initiated (e.g. turning on the signal) automation. Given that the same action by the driver in a car with and without these autonomous features results in drastically different ramifications, states like CA, NV and FL are mandating special registrations for drivers of autonomous vehicles. The registration is based on the level of autonomous features of the vehicle.

Beyond the responsibility question that touches the legal aspects, driverless car technology needs to continually improve on the ethical questions that come up during an emergency. For example, is it okay to crash into the car in the next lane to avoid a bicyclist who is jumping a pedestrian signal?

Then comes the integrity question around the autonomous features. What is the possibility of these features being tampered with or becoming outdated? Is Tesla's over-the-air update going to become the typical standard for automakers across the globe?

In a nutshell, the legal aspects of driverless cars can best be handled by training drivers for those specific features. The ethical aspects, however, require more maturity in the technology. Add the complexity of differing driving rules across geographic regions (states, countries), and we are going to see a lot of technology evolution in this space.

Here are a few lingering thoughts I have regarding driverless cars. I am anxious to find the answers sooner rather than later.

  • What happens if the road sign standards change across borders? E.g. colors and sizes of signs across states, speed limits posted in miles vs. kms across countries. We may soon see a few settings on your dashboard to let the car know (or confirm) that you are driving in New Jersey or Maine or Canada.
  • Cars may be certified to run autonomously in certain areas only. Like “This car can use the autonomous features in CA and NV only, but not in AZ.”
  • Cars should be able to identify the speed limit on a signpost and ignore a similar-looking sign on a billboard next to a freeway. Will they do it by improving their sensors, or by depending on a networked repository (say, Google Maps) of speed limits in the area?
  • Visual congestion identification and taking alternate routes. Pretty simple given the current advances in maps technology.
  • In situations where disengaged drivers have no awareness of the circumstances that led to an accident, cars may require legally acceptable sensor information logs. In other words, cars would have scaled-down versions of the black boxes found in aircraft.
  • What if someone hacks the “car stack”? How does one get to know? Do we get to do a periodic (smog-check-like) stack check and certification? If this looks like a fantasy, please check out the Tesla hack and fix from a couple of days ago.

And here is an extreme one:

  • If it turns out that the damages caused in an accident by an autonomous car with a disengaged driver are much higher than the damages if an engaged driver were operating the car without autonomous features, what are the insurance ramifications? Would insurance companies track maturity levels of the autonomous features and charge accordingly for insurance?

I do live in interesting times.

Availability Is a Fundamental Requirement of Security

11 Jun

When people talk about security, they often picture confidentiality and integrity. However, availability plays an equally important role in defining security. In fact, major standards and certifications define security as a combination of confidentiality, integrity and availability.

There is a lighthearted quote in the security community: the most secure computer may be the one that is not connected to any network. But such systems hardly play any major role in providing meaningful services to customers and consumers. The goal of a security expert is to ensure that the system (and its services) is available to all intended users, while preserving the confidentiality and integrity of the data, the system and its services.

For an end-user-facing service (say, a shopping site or a cloud service) to operate as expected, several internal and public-facing infrastructure services must operate in tandem. A shopping site might require its DNS service (public), CDN service (public), payment exchange (public) and private cloud service (internal) to function properly in order to deliver its online services to end customers. As the comprehensiveness of online services increases, more and more micro-services, infrastructure services and housekeeping services play a major role in determining the health and availability of the overarching (end-user-facing) service.

As big companies increasingly outsource their IT infrastructure to cloud service vendors (DNS, mail and compute infrastructure, to name a few), they increasingly depend on the availability of each of these infrastructure components. As cloud service providers mature their infrastructure services, they become more and more alluring to small enterprises and startups, given the lower entry cost and minimal effort to scale up. In a nutshell, the availability of services outside the perimeter of a company, irrespective of its size, becomes an essential element in offering secure services to that company's employees and customers. On a side note, the definition of a company's perimeter is fast diluting as more and more cloud service providers offer infrastructure services.

Even for companies that internally host their infrastructure services, the availability of these services is the most critical component in providing secure services to their end customers or employees.

Lack of availability of contributing components severely impacts the security of an online service. Let's look at a simple example. When an authentication and authorization component operates at lower availability levels, users of that component (developers, IT admins) make compromises to lessen the impact of non-availability. For example, they may cache a few things for longer periods. That makes any online service depending on that authentication and authorization mechanism more vulnerable than a service operating on top of a highly available one. As more and more such compromises are made to reduce the impact of component unavailability, the online service accumulates more holes in its security.

Every developer and IT engineer should work towards providing hooks for availability metrics and augmenting them with actionable operating procedures when availability gets impacted. These hooks and procedures should be fine-tuned as time goes on and as new factors influence the availability.

Every security expert should treat the availability of an online service, and of its internal components, as a fundamental requirement for ensuring the security of that service. Ample bells and whistles (in the form of monitoring and management infrastructure) should be set up to catch availability issues within an online service's ecosystem. Trends toward reduced availability of a component or service need to be detected and acted upon.
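
At its simplest, even a cron-driven probe gives a starting point for such bells and whistles. Here is a minimal sketch (the health-check URLs are hypothetical placeholders, and a real deployment would feed a proper monitoring system rather than echo to a log):

# probe each dependency's health endpoint; flag anything that isn't a 200
for url in https://shop.example.com/health https://auth.example.com/health
do
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url")
    [ "$code" = "200" ] || echo "$(date -u) ALERT: $url returned $code"
done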

authbind vs iptables on AWS

29 Jan

Here is a short description of the scenario I was working on. I am using a standard AWS AMI to run tomcat (tomcat7, to be specific). The default configuration of AWS AMIs (and many other off-the-shelf Unix-based servers) is such that tomcat (or any other program running with non-superuser credentials) can't bind to privileged ports. However, tomcat needs these privileged ports (443 for TLS and 80 for standard HTTP) to serve public-facing pages.
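
You can see the restriction with any non-root process that tries to grab a port below 1024. A quick demonstration (the exact error wording varies by netcat flavor):

# as a regular user, attempting to listen on port 80 fails
nc -l 80
# nc: bind failed: Permission denied (wording varies by tool)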

Making tomcat run as superuser is a really bad idea (the why is beyond this article), so here are a few tricks to make tomcat work on privileged ports.

authbind

There is a lot of mindshare around authbind when it comes to hosted environments. The authbind manpage describes how it can be used to make a program bind to sockets on privileged ports. However, if you are using a standard AWS AMI, you may face some challenges using authbind. Also, for automated environments (read: Chef) in AWS, I found authbind more complicated to work with.
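
For completeness, the typical setup looks like the sketch below. authbind consults per-port permission files under /etc/authbind/byport, and the wrapped program is then allowed to bind (the tomcat user name and startup script path here are assumptions that vary by distribution):

# allow the tomcat user to bind to ports 80 and 443
sudo touch /etc/authbind/byport/80 /etc/authbind/byport/443
sudo chown tomcat /etc/authbind/byport/80 /etc/authbind/byport/443
sudo chmod 500 /etc/authbind/byport/80 /etc/authbind/byport/443
# launch tomcat under authbind (--deep covers child processes)
authbind --deep /usr/share/tomcat7/bin/startup.sh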

iptables

Port redirection using the NAT features of iptables is simple and straightforward. However, it requires additional configuration on tomcat to use proxy mode on the privileged ports.

Here is the NAT configuration using iptables.

sudo /sbin/iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo /sbin/iptables -t nat -I PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
sudo service iptables save

Once this is done, all inbound traffic on port 80 is redirected to 8080, and likewise for the pair 443 and 8443. This way, tomcat can still bind to port 8080 for HTTP and 8443 for TLS while serving incoming connections on 80 and 443 respectively.
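
A quick way to confirm the rules took effect is to list the NAT table and hit the privileged port, assuming tomcat is already listening on 8080 (the hostname below is a placeholder):

sudo /sbin/iptables -t nat -L PREROUTING -n --line-numbers
# PREROUTING does not see loopback traffic, so test from another host
# (or against the instance's public address), not via localhost
curl -sI http://<instance-public-dns>/ | head -1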

When a client program queries the port information from tomcat, it should respond with ports 80 and 443 instead of 8080 and 8443. To ensure that, one can use tomcat's proxy support. Here is the additional configuration in the tomcat connector settings in server.xml:

<Connector port="8443" proxyPort="443" .../>
<Connector port="8080" proxyPort="80" .../>

Other Considerations

There are better ways to handle this port redirection when you have front-ending load balancers and/or proxy servers in place. Having proxies/load balancers helps mitigate more issues than just the redirection problem. However, the iptables approach beats the authbind approach when you are running a single server on AWS without a lot of additional infrastructure and configuration in place.

Data Insurance: to Limelight and Mainstream

31 Dec

In contrast with other essential elements of human life like death and taxes, the history of insurance has been very short. In terms of evolution, however, the concept of insurance has been constantly changing and continuously embracing new domains. Insuring property, life, health, beauty, athletic talent and limbs is commonplace now. Data insurance, once limited to multi-billion-dollar corporations and even then to limited scenarios, is now taking center stage.

The drivers for data insurance have existed for quite some time, but they had not proliferated into human life and organizational practices the way they do now. The key drivers pushing the trend towards data insurance are the protections we need against data loss, data compromise and data misuse.

Organizations, as their presence evolves across the web, social networks and mobile applications, are capturing more and more data. The rest of this article focuses on two categories of this data.

  • Acquired data: all the customer information, employee information and any other user information collected directly or indirectly from users. By nature, this class of data is highly likely to contain sensitive information such as personally identifiable information (PII) and credit card details.
  • Generated data: all the housekeeping, analytics and user-behavior data in an organization. This data is vital in delivering a better user experience to both end users and internal teams. It is mostly generated by an organization's web/mobile applications that interface with end users, and may be augmented with data inferred from other user interactions like support calls and email exchanges.

Any compromise of acquired data leads to very big exposure: loss of face, legal tangles and/or customer loyalty issues. The data compromises detected at companies like Target and Home Depot led to customer unrest, loss of loyalty and severe financial implications from the legal consequences.

Any compromise of generated data makes an organization limp (often heavily) in its business processes. Generated data compromise mostly leads to inefficiencies and exposure of the secret sauce to the competition.

The impact of a compromise of generated data can't be taken any more lightly than that of an acquired data compromise. Generated data may also include intellectual property that could hurt a company in the long run if compromised.

Digital (or digitized) data captured by individuals is also increasing in prominence, value and risk of compromise. Whether it is personal pictures of celebrities or tax data of individuals, the risk associated with any compromise of this data is growing over time. As data access avenues multiply (e.g. health data accessed via a wearable device), so does the potential for compromise of personal data.

Given all this increased focus on data and its risks, we see a bigger shift towards corporations and individuals insuring their data. Data insurance is taking paths less traveled by insurance companies in the past, and data insurance packages now cover a wide variety of data sets.

Just as humans undergo a set of prerequisite tests before taking out a new health insurance package, data sets might undergo audits covering the access controls and security risks associated with the data. We may also see a trend towards re-audits at data insurance renewals to re-validate those access controls and risks.

The key factor in data insurance is determining the value of data. Human life insurance packages usually cover sums like 5x annual income. Vehicle insurance usually covers up to the Blue Book value of a vehicle. Coming up with a valuation for data is not that straightforward, though. The valuation process might differ greatly between acquired data and generated data. Unlike the steady depreciation of a vehicle's Blue Book value, the value of data may either decrease (data that becomes stale over time) or increase (with volume, or with the increased sensitivity of the same data) over time. Data insurance companies and the insured organizations/individuals will often re-evaluate the value of data to optimize costs and minimize the impact of exposure.

In summary, here are some of the primary factors by which data insurance evolves:

  • Categorization of data
  • Valuation of data
  • Data audits

As data insurance hits the mainstream, all these factors will see market growth and some degree of standardization beyond what we have today.