LibreSSL

04 Nov

LibreSSL (http://www.libressl.org/) is a recent fork of OpenSSL. The goal of LibreSSL is to provide a more secure alternative to OpenSSL; the developers who forked the code feel that OpenSSL is beyond repair at this point. Quoting from the LibreSSL website:

LibreSSL is a version of the TLS/crypto stack forked from OpenSSL in 2014, with goals of modernizing the codebase, improving security, and applying best practice development processes.

The best documentation of LibreSSL features (and default configurations) can be found in the OpenBSD 5.6 release notes. Looking at the list, this is an impressive push towards securing the implementation by default. Without worrying too much about backward compatibility, some of the less secure configurations and protocols are simply left out of the implementation.

By dropping support for a bunch of hardware engines and platforms, LibreSSL probably has fewer things to worry about. For example, dropping support for big-endian i386 and amd64 systems liberates it a bit. With classic adopters of big-endian architectures eventually becoming bi-endian, there is not much to lose here, in my opinion. However, reusing standard C library routines like malloc() and snprintf() could take an interesting turn. Dropping Kerberos support is interesting too – don’t we still have a large academic community working on it?

I like changes such as dropping SSLv2 support and no longer using the current time as a random seed, among a few others.

There have been several discussions in the past on which of these open-source SSL implementations is better. Being a legacy implementation, OpenSSL at this time requires a considerable amount of configuration to make it secure. From that viewpoint, LibreSSL might look better in terms of its out-of-the-box readiness for a more secure deployment. However, in the world of automated deployments and continuous integration, recipes exist to configure OpenSSL to avoid the less secure protocols and algorithms.
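
As an illustration of the kind of recipe involved, here is a minimal sketch on the OpenSSL side; example.com is a placeholder host, and the exact flags depend on the OpenSSL build in use:

# Show the cipher suites a typical hardening recipe would allow
$ openssl ciphers -v 'HIGH:!aNULL:!MD5:!RC4'

# Confirm that a server refuses an SSLv3 handshake; a handshake failure here is the desired outcome
$ openssl s_client -connect example.com:443 -ssl3 < /dev/null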

I am not sure at this point whether LibreSSL will surpass OpenSSL in adoption, but I am certainly glad to see a drive towards being “more secure by default.”


Shellshock bug and the risks

26 Sep

Bash, the quarter-century-old shell present on almost all popular Unix-based systems, has been found to be vulnerable. The exploit works by injecting specially crafted values into an environment variable that are then executed when a shell command is invoked. Once the exploit gets to that level, there is hardly any limit on what can be executed as part of the shell command.

The problem is made worse by the fact that many day-to-day uses of network-facing services can invoke bash internally. For example, CGI scripts on web servers, convenience utilities offered by network routers, and other limited command execution tools might be the key vulnerability on public networks and guest-access private networks. MITRE warns that sshd with ForceCommand is a potential attack vector.

The bug is being termed the Shellshock bug or the bash bug. Red Hat’s security blog article is one of the earliest to discuss the Shellshock bug in detail. Robert Graham of Errata Security is the best-known tracker of the issue and has ongoing observations and comments on his blog and Twitter account.

Here is how you can check whether the bash on your system is vulnerable. If it prints “vulnerable” on the first line, then patch your bash package.

$ env x='() { :;}; echo vulnerable' bash -c "echo test completed"
 vulnerable
 test completed
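
For comparison, a patched bash ignores the crafted function definition instead of executing it; depending on the patch level you may see warnings like the following, or just the test string (a rough sketch, exact wording varies by version):

$ env x='() { :;}; echo vulnerable' bash -c "echo test completed"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
test completed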

For web servers, here is the suggested test:

$ curl -i -X HEAD "http://sometestdomainhere.com/" -A '() { :;}; echo "Warning: Server Vulnerable"'

The output looks somewhat like the following listing. If it contains the “Warning” text, then it is highly likely that the web server’s bash (and any CGIs based on bash) are vulnerable. A clean result, however, does not assure that the system is not vulnerable; you may still have other CGIs running under bash that are vulnerable.

HTTP/1.1 200 OK
Date: Fri, 26 Sep 2014 02:51:52 GMT
Server: Apache
X-Powered-By: PHP/5.4.32
X-Pingback: http://sometestdomainhere.com/xmlrpc.php
Link: <http://sometestdomainhere.com/?p=14>; rel=shortlink
Content-Type: text/html; charset=UTF-8

Since the Shellshock bug has existed for quite a while, all versions of bash currently in active use are likely to be vulnerable. Patching some of these devices might be trivial, but there might still be several other devices that are hard to patch.

  • Servers that run services like web/FTP might be vulnerable if their CGI scripts end up using bash. Invoking bash from PHP code is considered not vulnerable, unless there are ways to circumvent the input parameter validations of the PHP code. The Red Hat article mentioned above has links to instructions on how to fix this on Red Hat variants of Linux. For Ubuntu, this is a good thread to follow.
  • Desktops that use network-facing services like DHCP over wireless and sshd are vulnerable as long as those services internally run bash commands or use bash as the shell for the session. There are still discussions on whether Mac OS X DHCP is vulnerable, because Apple modified its DHCP utilities and claims that they don’t use bash internally. Mac OS X branched version 3 of bash and does its own updates to the shell. There are instructions on how to patch OS X, tailored more for Unix admins (they require Xcode) than for normal users.
  • There are some suggestions about renaming bash to a different name, but that might break more things than it fixes. Use this technique with utmost caution.
  • Beyond desktops and servers, devices like internet routers may have vulnerabilities due to the utilities and services they offer. For these devices, waiting for vendor-released patches is the best option, but explore the possibility of turning off these convenience utilities in the meantime.

Errata Security also has notes on the wormable nature of the Shellshock bug. So patch your bash package as early as you can.
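
On mainstream Linux distributions, patching usually amounts to upgrading the bash package through the package manager; a minimal sketch (package and command names may differ slightly by distribution and version):

# Debian / Ubuntu
$ sudo apt-get update && sudo apt-get install --only-upgrade bash

# RHEL / CentOS / Fedora
$ sudo yum update bash

# Re-run the env test above to confirm the fix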

Upcoming AWS / EC2 instance reboot

25 Sep

If you are using AWS EC2 instances, a reboot of most of those instances is on the horizon. AWS has sent notices about this reboot, which is scheduled between 02:00 GMT on September 26th and 23:59 GMT on September 30th.

Read more about this reboot on Gigaom and RightScale. Technical forums on AWS and other sites are already buzzing with traffic, discussing the potential impact and how to ensure that services are not affected.

Given the urgency and the number of instances that are impacted, it looks like the patch is potentially going to address a security vulnerability. The actual details of the patch and the issues it fixes will be known around October 1st.

Drawing from various discussions on related forums, here is a quick summary of what to watch out for during this AWS / EC2 instance reboot:

  • The reboot is not limited to any single availability zone; it spans all availability zones.
  • The good news is that the EC2 instances in all availability zones are not rebooted at the same time. So if your instances span multiple availability zones, you are relatively safer.
  • The reboot does not impact instances of types T1, T2, M2, R3, and HS1. However, if the patch fixes issues on these instance types too, then you might be on your own. We will know more around October 1st.

Here are a few quick checks for those who are getting impacted.

  • Check your mailbox for a notice from AWS; it is likely to give more details about the reboots, their impact, and the schedules.
  • Ensure that the key services on your instances are configured to restart automatically when the system boots up (a quick sketch follows this list). It sounds silly, but I have seen code that takes good care of newly spawned instances but doesn’t handle reboots that well.
  • Ensure that your network paths (non-Elastic IPs, Route 53 entries, S3 buckets) survive a reboot of the instances.
  • For those whose instances are NOT rebooted by AWS: watch for the issues fixed by AWS during this reboot, evaluate their impact on your instances, and take corrective measures as soon as possible.
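
For the auto-restart item above, here is a minimal sketch of enabling a service to start at boot; “myapp” is a hypothetical service name, and the right command depends on the init system of your AMI:

# SysV-style init (e.g., Amazon Linux, older RHEL/CentOS)
$ sudo chkconfig myapp on

# Debian/Ubuntu-style init
$ sudo update-rc.d myapp defaults

# systemd-based distributions
$ sudo systemctl enable myapp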

For those who can afford to be heroic – why wait until AWS reboots your instances? Reboot them on your own in each availability zone and test your resilience.
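
If you go that route, the reboot can be issued from the AWS CLI; i-0123abcd below is a hypothetical instance ID:

$ aws ec2 reboot-instances --instance-ids i-0123abcd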

Email Transit Security Needs Better Adoption

05 Aug

Email transit security is not a new concept, but it deserves more attention in terms of adoption and practice.

Email has become the key component of information access – every online service identifies you through your email ID. All online transactions (not just financial transactions) result in one or more transactional emails being sent to you. Examples of transactional emails are file share notifications, password reset mails, shipment notifications, and account information change notifications. Despite not containing direct financial information, all these mails have the potential to compromise the security of an individual’s or company’s information.

We all take ample care to access our email over a secure connection using tools like Thunderbird, Outlook, or web-based secure access. These secure connections ensure that email travels securely from the mail server to a client device like a desktop or phone. However, what is the assurance that the mail actually traveled from the sender to that mail server in a secure way?

Securing email during transit is not a new concept; there are enough protocols and processes in place to ensure it. However, transit security isn’t adopted by all major service providers and organizational senders, and this poses a risk to the information that emails carry for individuals and organizations.

Google’s safer email campaign and email transparency report focus on documenting metrics and best practices related to email transit security. A couple of pictures on that page describe how TLS helps ensure the security of email in transit.

Adoption of TLS for email transit security is not a unilateral fix by one or more ISPs. When email hops between two ISPs, both ISPs have to agree on the use of TLS for transmitting it. So no ISP or individual organization can claim that it sends and receives all of its email over a secure channel. At the time of writing this article, only 74% of mails from Google are accepted by recipients over a secure connection. That number is much better when compared with the 54% of mails received by Google from other ISPs over a secure connection.
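
A quick way to check whether a receiving mail server offers TLS for incoming mail is to open an SMTP session and look for the STARTTLS advertisement; mx.example.com below is a placeholder for the recipient domain’s MX host:

$ openssl s_client -starttls smtp -connect mx.example.com:25
# A successful run prints the server certificate and the negotiated cipher.
# If the server does not offer STARTTLS, the handshake fails and mail to it travels in the clear.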

There are several techniques employed by eavesdroppers to extract meaningful information out of even non-confidential content. Ensuring email transit security helps an organization in the long run. Even if the security of mail content is not of prime concern for an organization today, it is highly recommended that email is sent securely during transit. That way, the organization is not giving away information easily to eavesdroppers.

To trust, or not to trust

20 Jul

Do you trust? That is a vague question. It is very difficult to reply to this question with a definitive answer.

Do you trust [something]? That is a better question. Most likely, one will be comfortable giving a specific answer.

Do you trust [someone]? Here comes the complexity. Deciding to trust someone has several factors to weigh. Trusting a person almost always requires an action or objective to qualify it, so that the affirmation of trust can be deduced. Trusting a non-human also requires additional qualification, but with a non-human, the purpose is mostly inherent and intended.

When I read this New York Times article about the evolution of trust, it raised a few questions about trust and how we model it in computing and compute-driven social environments. The computability (or quantification) of trust and the deduction of trust are, in general, ever-evolving and complex problems.

For the non-humans we encounter in the digital world, trust is established through hashes, digital certificates, digital signatures, and the like, and these problems are continuously worked on by entities like EMC’s RSA. For example, we may readily trust a PGP-signed email or a shopping site that is protected by an SSL certificate.
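
As a small illustration of that machinery, here is a sketch of the kind of checks involved; message.asc and shop.example.com are placeholders:

# Verify a PGP-signed message against the sender's public key
$ gpg --verify message.asc

# Inspect the certificate presented by a shopping site
$ openssl s_client -connect shop.example.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates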

Trusting humans we meet online is nothing new. Social networking sites took the concept of acquaintance to a new dimension and are slowly morphing it into a basis for establishing trust. For example, how many Facebook applications did you install (and trust) recently? How many people you have never met in the real world did you befriend online and, as a result, trust with your contacts, some amount of personal details, pictures, and so on? More often than not, social networking thrives on establishing trust beyond the immediate circle of acquaintance: luring you to trust a friend of a friend.

Shopping sites like eBay and Amazon have rating systems at both the product level and the seller level. Most of these ratings are based on previous transactions and the corresponding human responses. The ratings quantify that transaction and response information to form a basis for trust, and customers make shopping decisions that are heavily influenced by these seller and product ratings.

Those two needs for trusting humans, for digital information and for shopping transactions, involve a certain level of intrusion into personal space. But the intrusion of services like Airbnb into one’s personal space is more physical and prominent. The concept of giving someone you possibly don’t know access to physical resources carries a considerable mental barrier. Service providers in this space will continuously try to lower that barrier, or find ways past it, with quantified computation of trust. New dimensions of trust establishment are likely to emerge to solve this need, and this space is likely to go beyond simple rating systems run by the service providers.


More from openssl last week

09 Jun

The Heartbleed bug might have created pressure-filled schedules for many a system administrator and security practitioner around the globe, but it has definitely done a few good things. One great outcome of Heartbleed is closer scrutiny of the OpenSSL code and its use cases, which is going to help secure online activities in the long run.

Last week, OpenSSL released a few more patches and people jumped on them right away. The issues involved are not as serious as Heartbleed (actually, nowhere close), but the attention these patches have received is good.

Broadly, there are two major vulnerabilities that are of interest to me from that set.

  • SSL/TLS MITM vulnerability (CVE-2014-0224): This vulnerability requires both the client and the server to be running vulnerable versions of OpenSSL, which made it relatively easy to contain. It exploits a weakness in the ChangeCipherSpec phase of the SSL handshake, a small but practical window of opportunity for the attacker. Also, connectionless services (say, streaming) are impacted to a greater degree than connection-oriented services. That made this particular vulnerability a very important one to fix, but not a super-critical one from a timeline standpoint.
  • SSL_MODE_RELEASE_BUFFERS NULL pointer dereference (CVE-2014-0198): This vulnerability can be triggered remotely and leads to denial-of-service attacks. Luckily, none of the key sites I work with have this mode explicitly enabled on their Apache and nginx based servers.

The rest of the vulnerabilities are not so critical for the kinds of environments I work on. In any case, patching for either of the above would leave the servers well patched against all of these vulnerabilities.
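
A reasonable sanity check after patching is to confirm the OpenSSL build in use; 1.0.1h (released 5 Jun 2014) carries these fixes upstream, though distributions often backport the fixes without changing the version string:

$ openssl version
OpenSSL 1.0.1h 5 Jun 2014

# On RPM-based systems, the package changelog shows backported fixes even when the version string is older
$ rpm -q --changelog openssl | grep CVE-2014-0224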

So it was a good week/weekend, one that involved verifying and patching rather than rushing and fixing.


DKIM, RFCs and Interpretations

28 May

I have been working on DKIM for the last couple of years and the journey has been very interesting. I started with simple DKIM signing of mails and eventually spent considerable time on best practices for Signer Actions and Verifier Actions. The latter part has been, to put it in a nutshell, an amazing experience.

If you live on the leading edge of any Internet technology that needs multi-vendor support and is cross-platform, you know how critical the RFCs (http://www.ietf.org/rfc.html) turn out to be. As RFCs (and their respective consumers) evolve, the room for interpretation of the constructs in the RFCs increases, and you see a lot of discussion around these interpretations.

Here is an example related to DKIM. For quite some time, DKIM signing entities have been used to publishing public key records in the following format:

“k=<proto>\; p=<pubkey>”

The above format has been a minimal and functional one for quite some time. For example, this is what Gmail uses for the selector 20120113:

$ dig +short @ns1.google.com 20120113._domainkey.gmail.com txt
“k=rsa\; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1Kd87/UeJjenpabgbFwh+eBCsSTrqmwIYYvywlbhbqoo2DymndFkbjOVIPIldNs/m40KF+yzMn1skyoxcTUGCQs8g3FgD2Ap3ZB5DekAo5wMmk4wimDO+U8QzI3SD0” “7y2+07wlNWwIt8svnxgdxGkVbbhzY8i+RQ9DpSVpPbF7ykQxtKXkv/ahW3KjViiAH+ghvvIhkx4xYSIc9oSwVmAl5OctMEeWUwg8Istjqz8BZeTWbf41fbNhte7Y+YqZOwq1Sd0DbvYAD9NOZK9vlfuac0598HY+vtSBczUiKERHv1yRbcaQtZFh5wtiRrN04BLUTD21MycBX5jYchHjPY/wIDAQAB”

However, in recent weeks, a growing set of ISPs has been mandating the other fields of a DKIM record. For example, RFC 5863 discusses DKIM verifier considerations in Appendix 1.2. The considerations around the DNS record say:

If a DKIM verifier finds a selector record that has an empty “g” field (“g=;”) and it does not have a “v” field (“v=DKIM1;”) at its beginning, it is faced with deciding if this record was:

1. from a DK signer that transitioned to supporting DKIM but forgot to remove the “g” field (so that it could be used by both DK and DKIM verifiers); or

2. from a DKIM signer that truly meant to use the empty “g” field but forgot to put in the “v” field. It is advised that you treat such records using the first interpretation, and treat such records as if the signer did not have a “g” field in the record.

It looks like the notion of an absent “v” field is generating a lot of noise from a few ISPs in recent weeks. Even though the field is deemed optional, the interpretations by these ISPs have ripple effects on how the other fields (like the granularity field) are treated in such situations.

Having read this RFC (and RFC 6376) a few times back and forth, it is time to test the DNS records by adding a “v” field. I should be able to see the impact on those ISPs that changed their interpretations recently.
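
For reference, here is a sketch of the updated selector record, with the optional “v” tag placed first as RFC 6376 requires; selector1 and example.com are placeholders and the public key is elided:

"v=DKIM1\; k=rsa\; p=<pubkey>"

# Verify the published record after the change
$ dig +short selector1._domainkey.example.com txt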


Enterprises And OpenStack

01 Jan

A technology might show a lot of potential, but its commercial success in certain market segments is driven by a varied set of factors. Take OpenStack, the technology that comes to mind when we talk about private clouds. It has all the right ideas to be the primary implementation choice for any private cloud. However, its proliferation into enterprises with private cloud needs is still not up to the mark.

In this Gartner Blog Network article (now almost a couple of months old), Alessandro Perilli discusses various factors that are keeping OpenStack from being where it should be in enterprises. Beyond Alessandro’s views, one of the comments summarizes the challenge for the OpenStack governance board – not having a firm say in limiting the features (and scope) and eventually trying to chew on too many features. From the day the article was published, there has been a good amount of discussion for and against the views expressed in it. One resounding agreement across the fences is that OpenStack still has a lot of potential for commercial success in enterprises. People at large enterprises might love the OpenStack technology, but they never get to the level of advocating strongly for making it the first choice.

The year ahead is going to be an interesting time for private cloud implementations. I hope OpenStack uses this year to its fullest potential and makes itself a widely implemented solution within large enterprises.


Copenhagen Wheel

06 Dec

What happens when a premier academic institution teams up with the city of cyclists? You see one of the best inventions now heading into mass production.

MIT’s SENSEable City Lab and the City of Copenhagen bring us a pedal-assist electric system called the Copenhagen Wheel. The wheel can be retrofitted to almost any bicycle. Superpedestrian, the startup that has exclusive production rights for the technology, is now accepting pre-orders for the wheel, which comes in both single-speed and multi-speed variants. The wheel itself may cost a lot more than an average bike, but I think it is really worth it.

How does it work? The key feature of the wheel is its regenerative braking capability. The rider can use the exercise mode, pedaling against the motor and charging the battery in the process, or the motor-assist mode, in which the battery power helps the rider pedal easily through, say, slopes. Each of these modes has three levels, and the modes/levels can be selected using a smartphone that communicates with the wheel wirelessly. I am not sure if the modes switch automatically using the torque sensors in the system (my guess is that they do, but we need to wait for more details and see).

The wheel may be a costly affair in the initial days (it is much costlier than the average bike on the streets), but it offers great potential for bicycling adoption. I am excitedly waiting for mass production and global availability of this wheel.

The Non-Human Moment

09 Nov

On the 12th of November, we are going to see a first of its kind – the first non-human ringing the closing bell on NASDAQ. It’s not a breakthrough moment in history, but it is a symbolic moment that marks the evolution of robotics outside academia and labs.

About 25 years ago, when I first got exposed to the technology, robotics was more of an academic concept and existed sparsely in labs. It took a slower-than-expected evolution for robotics to reach where it is today. The concepts of robotics are very prevalent in well-controlled industrial and manufacturing segments, but there is a lot left to accomplish for the technology to be viable in day-to-day life. If the technology and funding interest in robotics seen in recent years continues for some more time, we are surely going to witness that viability becoming a reality soon. Keshav’s recent projects with the Gunn Robotics team are fun to watch and are a good sign that the viability is kicking in. This related Forbes article discusses the business and research interest in robotics and is a good read.

On the 12th of November, I will be sure to watch that closing bell. It will be another baby-step moment for robotics.