Monday, 28 October 2013

Last time we discussed how to access the SIM card and use it as a secure element to enhance Android applications. One of the main problems with this approach is that, since SIM cards are controlled by the MNO, any applets running on a commercial SIM have to be approved by it. Needless to say, that considerably limits flexibility. Fortunately, NFC-enabled Android devices can communicate with practically any external contactless smart card, and you can install anything on those. Let's explore how an NFC smart card can be used to sign email on Android.

NFC smart cards

As discussed in previous posts, a smart card is a secure execution environment on a single chip, typically packaged in a credit card-sized plastic body or in the smaller 2FF/3FF/4FF form factors when used as a SIM card. Traditionally, smart cards connect with a card reader using a number of gold-plated contact pads. The pads are used both to provide power to the card and to establish serial communication with its I/O interface. Size, electrical characteristics and communication protocols are defined in the ISO 7816 series of standards. Those traditional cards are referred to as 'contact smart cards'. Contactless cards, on the other hand, do not need physical contact with the reader. They draw power and communicate with the reader using RF induction. The communication protocol they use (T=CL) is defined in ISO 14443 and is very similar to the T=1 protocol used by contact cards. While smart cards that have only a contactless interface do exist, dual-interface cards that have both contacts and an antenna for RF communication are the majority. The underlying RF standard used varies by manufacturer, and both Type A and Type B are common.

As we know, NFC has three standard modes of operation: reader/writer (R/W), peer-to-peer (P2P) and card emulation (CE) mode. All NFC-enabled Android devices support R/W and P2P mode, and some can also provide CE, either using a physical secure element (SE) or software emulation. All that is needed to communicate with a contactless smart card is the basic R/W mode, so such cards can be used on practically all Android devices with NFC support. This functionality is provided by the IsoDep class. It offers only basic command-response exchange via the transceive() method; any higher-level protocol needs to be implemented by the client application.
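For illustration, here is a minimal sketch of exchanging a raw command APDU with a contactless card using IsoDep (from android.nfc.tech), assuming the tag has been delivered via an NFC dispatch intent. The AID in the SELECT command is just a placeholder:

// obtain the tag from the dispatched NFC intent
Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
IsoDep isoDep = IsoDep.get(tag);
isoDep.connect();
isoDep.setTimeout(5000); // card operations can be slow over RF

// SELECT by AID (ISO 7816-4); replace with the target applet's actual AID
byte[] selectCmd = {
        (byte) 0x00, (byte) 0xA4, (byte) 0x04, (byte) 0x00,
        (byte) 0x07, (byte) 0xA0, 0x00, 0x00, 0x00, 0x01, 0x02, 0x03
};
byte[] response = isoDep.transceive(selectCmd);
// the last two bytes of the response are the status word; 0x9000 means success
isoDep.close();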

Securing email

There have been quite a few new services trying to reinvent secure email in recent years. They aim to make it 'easy' for users by taking care of key management and shifting all cryptographic operations to the server. As recent events have reconfirmed, introducing an intermediary is not a very good idea if communication between two parties is to be, and remain, secure. Secure email itself is hardly a new idea, and the 'old-school' way of implementing it relies on public key cryptography. Each party is responsible both for protecting their private key and for verifying that the public key of their counterpart matches their actual identity. The method used to verify identity is the biggest difference between the two major secure email standards in use today, PGP and S/MIME. PGP relies on the so-called 'web of trust', where everyone can vouch for the identity of someone else by signing their key (usually after meeting them in person), and keys with more signatures can be considered trustworthy. S/MIME, on the other hand, relies on PKI and X.509 certificates, where the issuing authority (CA) is relied upon to verify identity when issuing a certificate. PGP has the advantage of being decentralized, which makes it harder to break the system by compromising a single entity, as has happened with a number of public CAs in recent years. However, it requires much more user involvement and is especially challenging for new users. Additionally, while many commercial and open source PGP implementations do exist, most mainstream email clients do not support PGP out of the box and require the installation of plugins and additional software. On the other hand, all major proprietary (Outlook variants, Mail.app, etc.) and open source (Thunderbird) email clients have built-in and mature S/MIME implementations. We will use S/MIME for this example because it is a lot easier to get started with and test, but the techniques described can be used to implement PGP-secured email as well. Let's first discuss how S/MIME is implemented.

Signing with S/MIME

The S/MIME, or Secure/Multipurpose Internet Mail Extensions, standard defines how to include signed and/or encrypted content in email messages. It specifies both the procedures for creating signed or encrypted (enveloped) content and the MIME media types to use when adding them to the message. For example, a signed message would have a part with the Content-Type: application/pkcs7-signature; name=smime.p7s; smime-type=signed-data, which contains the message signature and any associated attributes. To an email client that does not support S/MIME, like most Web mail apps, this would look like an attachment called smime.p7s. S/MIME-compliant clients would instead parse and verify the signature and display some visual indication of the signature verification status.

The more interesting question however is what's in smime.p7s? The 'p7' stands for PKCS#7, which is the predecessor of the current Cryptographic Message Syntax (CMS). CMS defines structures used to package signed, authenticated or encrypted content and related attributes. As with most PKI X.509-derived standards, those structures are ASN.1 based and encoded into binary using DER, just like certificates and CRLs. They are sequences of other structures, which are in turn composed of yet other ASN.1 structures, which are..., basically sequences all the way down. Let's try to look at the higher-level ones used for signed email. The CMS structure describing signed content is predictably called SignedData and looks like this:

SignedData ::= SEQUENCE {
    version CMSVersion,
    digestAlgorithms DigestAlgorithmIdentifiers,
    encapContentInfo EncapsulatedContentInfo,
    certificates [0] IMPLICIT CertificateSet OPTIONAL,
    crls [1] IMPLICIT RevocationInfoChoices OPTIONAL,
    signerInfos SignerInfos }

Here digestAlgorithms contains the OIDs of the hash algorithms used to produce the signature (one for each signer) and encapContentInfo describes the data that was signed, and can optionally contain the actual data. The optional certificates and crls fields are intended to help verify the signer certificate. If absent, the verifier is responsible for collecting them by other means. The most interesting part, signerInfos, contains the actual signature and information about the signer. It looks like this:

SignerInfo ::= SEQUENCE {
    version CMSVersion,
    sid SignerIdentifier,
    digestAlgorithm DigestAlgorithmIdentifier,
    signedAttrs [0] IMPLICIT SignedAttributes OPTIONAL,
    signatureAlgorithm SignatureAlgorithmIdentifier,
    signature SignatureValue,
    unsignedAttrs [1] IMPLICIT UnsignedAttributes OPTIONAL }

Besides the signature value and the algorithms used, SignerInfo contains a signer identifier (sid) used to find the exact certificate that was used, as well as a number of optional signed and unsigned attributes. Signed attributes are included when producing the signature value and can contain additional information about the signature, such as the signing time. Unsigned attributes are not covered by the signature value, but can contain signed data themselves, such as a countersignature (an additional signature over the signature value).
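If you want to poke at these structures yourself, Bouncy Castle's ASN.1 classes can parse and pretty-print any DER-encoded blob. A minimal sketch that dumps an smime.p7s attachment saved to a file (the classes live under org.spongycastle.asn1 when using Spongy Castle):

// parse and dump a DER-encoded CMS structure (e.g. an smime.p7s attachment)
ASN1InputStream in = new ASN1InputStream(new FileInputStream("smime.p7s"));
ASN1Primitive obj = in.readObject();
System.out.println(ASN1Dump.dumpAsString(obj, true));
in.close();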

To sum this up, in order to produce an S/MIME signed message, we need to sign the email contents and any attributes, generate the SignerInfo structure, wrap it into a SignedData, DER-encode the result and add it to the message using the appropriate MIME type. Sounds easy, right? Let's see how this can be done on Android.

Using S/MIME on Android

On any platform, you need two things in order to generate an S/MIME message: a cryptographic provider that can perform the actual signing using an asymmetric key, and an ASN.1 parser/generator in order to produce the SignedData structure. Android has JCE providers that support RSA, recently even with hardware-backed keys. What's left is an ASN.1 generator. While ASN.1 and DER/BER have been around for ages, and there are quite a few parsers/generators, the practically useful choices are not that many. No one really generates code directly from the ASN.1 modules found in the related standards; most libraries implement only the necessary parts, building on available components. Both of Android's major cryptographic libraries, OpenSSL and Bouncy Castle, contain ASN.1 parsers/generators and have support for CMS. The related APIs are not public though, so we need to include our own libraries.

As usual we turn to Spongy Castle, which provides all of Bouncy Castle's functionality under a different namespace. In order to be able to process CMS and generate S/MIME messages, we need the optional scpkix and scmail packages. The first one contains PKIX and CMS related classes, and the second one implements S/MIME. However, there is a twist: Android lacks some of the classes required for generating S/MIME messages. As you may know, Android has implementations for most standard Java APIs, with a few exceptions, most notably the GUI widget related AWT and Swing packages. Those are rarely missed, because Android has its own widget and graphics libraries. However, besides widgets, AWT contains classes related to MIME media types as well. Unfortunately, some of those are used in libraries that deal with MIME objects, such as JavaMail and the Bouncy Castle S/MIME implementation. JavaMail versions that include alternative AWT implementations repackaged for Android have been available for some time, but since they use some non-standard package names, they are not a drop-in replacement. That applies to Spongy Castle as well: some source code modifications are required in order to get scmail to work with the javamail-android library.

With that sorted out, generating an S/MIME message on Android is just a matter of finding the signer key and certificate and using the proper Bouncy Castle and JavaMail APIs to generate and send the message:

PrivateKey signerKey = KeyChain.getPrivateKey(ctx, "smime");
X509Certificate[] chain = KeyChain.getCertificateChain(ctx, "smime");
X509Certificate signerCert = chain[0];
X509Certificate caCert = chain[1];

SMIMESignedGenerator gen = new SMIMESignedGenerator();
gen.addSignerInfoGenerator(new JcaSimpleSignerInfoGeneratorBuilder()
        .setProvider("AndroidOpenSSL")
        .setSignedAttributeGenerator(new AttributeTable(signedAttrs))
        .build("SHA512withRSA", signerKey, signerCert));
Store certs = new JcaCertStore(Arrays.asList(signerCert, caCert));
gen.addCertificates(certs);

MimeMultipart mm = gen.generate(mimeMsg, "SC");
MimeMessage signedMessage = new MimeMessage(session);
Enumeration headers = mimeMsg.getAllHeaderLines();
while (headers.hasMoreElements()) {
    signedMessage.addHeaderLine((String) headers.nextElement());
}
signedMessage.setContent(mm);
signedMessage.saveChanges();

Transport.send(signedMessage);

Here we first get the signer key and certificate using the KeyChain API and then create an S/MIME generator by specifying the key, certificate, signature algorithm and signed attributes. Note that we specify the AndroidOpenSSL provider explicitly, as it is the only one that can use hardware-backed keys. This is only required if you have changed the default provider order when installing Spongy Castle; by default AndroidOpenSSL is the preferred JCE provider. We then add the certificates we want to include in the generated SignedData and generate a multi-part MIME message that includes both the original message (mimeMsg) and the signature. Finally, we send the message using the JavaMail Transport class. The JavaMail Session initialization is omitted from the example above; see the sample app for how to set it up to use Gmail's SMTP server. This requires the Gmail account password to be specified, but with a little more work it can be replaced with an OAuth token obtained from the system AccountManager.
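For reference, a minimal Session setup for Gmail's SMTP server might look roughly like this (STARTTLS on port 587; the account and password below are placeholders):

Properties props = new Properties();
props.put("mail.smtp.host", "smtp.gmail.com");
props.put("mail.smtp.port", "587");
props.put("mail.smtp.auth", "true");
props.put("mail.smtp.starttls.enable", "true");

Session session = Session.getInstance(props, new Authenticator() {
    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
        // placeholders -- replace with the actual account credentials
        return new PasswordAuthentication("user@gmail.com", "password");
    }
});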

So what about smart cards?

Using a MuscleCard to sign email

In order to sign email using keys stored on a smart card we need a few things: 
  • a dual-interface smart card that supports RSA keys
  • a crypto applet that allows us to sign data with those keys
  • some sort of middleware that exposes card functionality through a standard crypto API
Most recent dual-interface JavaCards fulfill our requirements, but we will be using an NXP J3A081, which supports JavaCard 2.2.2 and 2048-bit RSA keys. When it comes to open source crypto applets though, the choices are unfortunately quite limited. Just about the only one that is both full-featured and well supported in middleware libraries is the venerable MuscleCard applet. We will be using one of the fairly recent forks, updated to support JavaCard 2.2 and extended APDUs. To load the applet on the card you need a GlobalPlatform-compatible loader application, like GPJ, and of course the Card Manager keys. Once you have installed and initialized the applet, you can personalize it by generating or importing keys and certificates. After that the card can be used in any application that supports PKCS#11, for example Thunderbird and Firefox. Because the card is dual-interface, practically any smart card reader can be used on desktops. When the OpenSC PKCS#11 module is loaded in Thunderbird, the card will show up in the Security Devices dialog like this:


If the certificate installed on the card has your email address in the Subject Alternative Name extension, you should be able to send signed and encrypted emails (if you have the recipient's certificate, of course). But how do we achieve the same thing on Android?

Using MuscleCard on Android

Android doesn't support PKCS#11 modules, so in order to expose the card's crypto functionality we could implement a custom JCE provider that provides card-backed implementations of the Signature and KeyStore engine classes. That is quite a bit of work though, and since we are only targeting the Bouncy Castle S/MIME API, we can get away with implementing the ContentSigner interface. It provides an OutputStream that clients write the data to be signed to, an AlgorithmIdentifier for the signature method used and a getSignature() method that returns the actual signature value. Our MuscleCard-backed implementation could look like this:

class MuscleCardContentSigner implements ContentSigner {

    private ByteArrayOutputStream baos = new ByteArrayOutputStream();
    private MuscleCard msc;
    private String pin;
    ...
    @Override
    public byte[] getSignature() {
        msc.select();
        msc.verifyPin(pin);

        byte[] data = baos.toByteArray();
        baos.reset();
        return msc.sign(data);
    }
}
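The elided parts of the class are mostly boilerplate. A possible sketch of the remaining ContentSigner methods, assuming the applet produces SHA512withRSA signatures (the algorithm finder class is from Bouncy Castle's operator package):

@Override
public OutputStream getOutputStream() {
    // callers write the data to be signed here
    return baos;
}

@Override
public AlgorithmIdentifier getAlgorithmIdentifier() {
    // SHA512withRSA, to match what the card applet produces
    return new DefaultSignatureAlgorithmIdentifierFinder().find("SHA512withRSA");
}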

Here the MuscleCard class is our 'middleware' and encapsulates the card's RSA signature functionality. It is implemented by sending the required command APDUs for each operation using Android's IsoDep API and aggregating and converting the results as needed. For example, verifyPin() is implemented like this:

class MuscleCard {

    private IsoDep tag;

    public boolean verifyPin(String pin) throws IOException {
        String cmd = String.format("B0 42 01 00 %02x %s", pin.length(),
                toHex(pin.getBytes("ASCII")));
        ResponseApdu rapdu = new ResponseApdu(tag.transceive(fromHex(cmd)));
        if (rapdu.getSW() != SW_SUCCESS) {
            return false;
        }

        return true;
    }
}

Signing is a little more complicated because it involves creating and updating temporary I/O objects, but follows the same principle. Since the applet does not support padding or hashing, we need to generate and pad the PKCS#1 (or PSS) signature block on Android and send the complete data to the card. Finally, we need to plug our signer implementation into the Bouncy Castle CMS generator:

ContentSigner mscCs = new MuscleCardContentSigner(muscleCard, pin);
gen.addSignerInfoGenerator(new JcaSignerInfoGeneratorBuilder(
        new JcaDigestCalculatorProviderBuilder()
                .setProvider("SC")
                .build()).build(mscCs, cardCert));
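Internally, MuscleCard.sign() needs to build the padded signature block itself, since the applet only performs the raw RSA operation. A rough sketch of the EMSA-PKCS1-v1_5 encoding for SHA-512 (the DigestInfo prefix is the standard DER encoding from RFC 3447; fromHex() is the same helper used in the other snippets, and a 2048-bit key, i.e. 256-byte modulus, is assumed):

// DER prefix of DigestInfo for SHA-512 (RFC 3447, section 9.2)
static final byte[] SHA512_DIGEST_INFO_PREFIX =
        fromHex("3051300d060960864801650304020305000440");

static byte[] pkcs1Pad(byte[] data, int keySizeBytes) throws Exception {
    MessageDigest md = MessageDigest.getInstance("SHA-512");
    byte[] digest = md.digest(data);

    // T = DigestInfo prefix || hash
    byte[] t = new byte[SHA512_DIGEST_INFO_PREFIX.length + digest.length];
    System.arraycopy(SHA512_DIGEST_INFO_PREFIX, 0, t, 0, SHA512_DIGEST_INFO_PREFIX.length);
    System.arraycopy(digest, 0, t, SHA512_DIGEST_INFO_PREFIX.length, digest.length);

    // EM = 0x00 || 0x01 || PS (0xff ... 0xff) || 0x00 || T
    byte[] em = new byte[keySizeBytes];
    em[0] = 0x00;
    em[1] = 0x01;
    Arrays.fill(em, 2, em.length - t.length - 1, (byte) 0xff);
    em[em.length - t.length - 1] = 0x00;
    System.arraycopy(t, 0, em, em.length - t.length, t.length);

    return em;
}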

After that the signed message can be generated exactly as when using local key store keys. Of course, there are a few caveats. Since apps cannot control when an NFC connection is established, we can only sign data after the card has been picked up by the device and we have received an Intent with a live IsoDep instance (see the foreground dispatch sketch below). Additionally, since signing can take a few seconds, we need to make sure the connection is not broken, by placing the device on top of the card (or using some sort of awkward case with a card slot). Our implementation also takes a few shortcuts by hard-coding the certificate object ID and size, as well as the card PIN, but those can be remedied with a little more code. The UI of our homebrew S/MIME client is shown below.
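To make sure our activity receives the tag as soon as the card is tapped, NFC foreground dispatch can be used. A rough sketch (activity code; intent filters are omitted so all ISO-DEP tags are dispatched to us):

@Override
protected void onResume() {
    super.onResume();
    NfcAdapter adapter = NfcAdapter.getDefaultAdapter(this);
    PendingIntent pi = PendingIntent.getActivity(this, 0,
            new Intent(this, getClass()).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP), 0);
    // only interested in ISO-DEP (ISO 14443-4) cards;
    // disableForegroundDispatch() should be called in onPause()
    String[][] techLists = new String[][] { { IsoDep.class.getName() } };
    adapter.enableForegroundDispatch(this, pi, null, techLists);
}

@Override
protected void onNewIntent(Intent intent) {
    Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
    IsoDep isoDep = IsoDep.get(tag);
    // keep a reference and enable the 'Sign with NFC' button
}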


After you import a PKCS#12 file into the system credential store, you can sign emails using the imported keys. The 'Sign with NFC' button is only enabled when a compatible card has been detected. The easiest way to verify the email signature is to send a message to a desktop client that supports S/MIME. There are also a few Android email apps that support S/MIME, but setup can be a bit challenging because they often use their own trust and key stores. You can also dump the generated message to external storage using MimeMessage.writeTo() and then parse the CMS structure using the OpenSSL cms command:

$ openssl cms -cmsout -in signed.message -noout -print
CMS_ContentInfo:
  contentType: pkcs7-signedData (1.2.840.113549.1.7.2)
  d.signedData:
    version: 1
    digestAlgorithms:
        algorithm: sha512 (2.16.840.1.101.3.4.2.3)
        parameter: NULL
    encapContentInfo:
      eContentType: pkcs7-data (1.2.840.113549.1.7.1)
      eContent: <absent>
    certificates:
      d.certificate:
        cert_info:
          version: 2
          serialNumber: 4
          signature:
            algorithm: sha1WithRSAEncryption (1.2.840.113549.1.1.5)
          ...
    crls:
      <empty>
    signerInfos:
        version: 1
        d.issuerAndSerialNumber:
          issuer: C=JP, ST=Tokyo, CN=keystore-test-CA
          serialNumber: 3
        digestAlgorithm:
          algorithm: sha512 (2.16.840.1.101.3.4.2.3)
          parameter: NULL
        signedAttrs:
            object: contentType (1.2.840.113549.1.9.3)
            value.set:
              OBJECT:pkcs7-data (1.2.840.113549.1.7.1)

            object: signingTime (1.2.840.113549.1.9.5)
            value.set:
              UTCTIME:Oct 25 16:25:29 2013 GMT

            object: messageDigest (1.2.840.113549.1.9.4)
            value.set:
              OCTET STRING:
                0000 - 88 bd 87 84 15 53 3d d8-72 64 c7 36 f8   .....S=.rd.6.
                000d - b0 f3 39 90 b2 a4 77 56-5c 9f e4 2e 7c   ..9...wV\...|
                001a - 7d 2e 0b 08 b4 b7 e7 6c-e9 b6 61 00 13   }......l..a..
                0027 - 25 62 69 2a bc 08 5b 4c-4f c9 73 cf d3   %bi*..[LO.s..
                0034 - c6 1e 51 c2 5f c1 64 77-3b 45 e2 cb      ..Q._.dw;E..
        signatureAlgorithm:
          algorithm: rsaEncryption (1.2.840.113549.1.1.1)
          parameter: NULL
        signature:
          0000 - a0 d0 ce 35 46 8c f9 cd-e5 db ed d8 e3 f0 08   ...5F..........
        ...
        unsignedAttrs:
          <empty>

Email encryption using the NFC smart card can be implemented in a similar fashion, but this time the card will be required when decrypting the message.

Summary

Practically all NFC-enabled Android devices can be used to communicate with a contactless or dual-interface smart card. If the interface of card applications is known, it is fairly easy to implement an Android component that exposes card functionality via a custom interface, or even as a standard JCE provider. The card's cryptographic functionality can then be used to secure email or provide HTTPS and VPN authentication. This could be especially useful when dealing with keys that have been generated on the card and cannot be extracted. If a PKCS#12 backup file is available, importing the file in the system credential store can provide a better user experience and comparable security levels if the device has a hardware-backed credential store. 

Friday, 27 September 2013

Our last post introduced one of Android 4.3's more notable security features -- improved credential storage, and while there are a few other enhancements worth discussing, this post will slightly change direction. As mentioned previously, mobile devices can include some form of a Secure Element (SE), but a smart card based UICC (usually called just 'SIM card') is almost universally present. Virtually all SIM cards in use today are programmable and thus can be used as a SE. Continuing the topic of hardware-backed security, we will now look into how SIMs can be programmed and used to enhance the security of Android applications.

SIM cards

First, a few words about terminology: while the correct term for modern mobile devices is UICC (Universal Integrated Circuit Card), since the goal of this post is not to discuss the differences between mobile networks, we will usually call it a 'SIM card' and only make the distinction when necessary. 

So what is a SIM card? 'SIM' stands for Subscriber Identity Module and refers to a smart card that securely stores the subscriber identifier and the associated key used to identify and authenticate to a mobile network. It was originally used on GSM networks and standards were later extended to support 3G and LTE. Since SIMs are smart cards, they conform to ISO-7816 standards regarding physical characteristics and electrical interface. Originally they were the same size as 'regular' smart cards (Full-size, FF), but by far the most popular sizes nowadays are Mini-SIM (2FF) and Micro-SIM (3FF), with Nano-SIM (4FF) introduced in 2012. 

Of course, not every smart card that fits in the SIM slot can be used in a mobile device, so the next question is: what makes a smart card a SIM card? Technically, it's conformance to mobile communication standards such as 3GPP TS 11.11 and certification by the SIMalliance. In practice it is the ability to run an application that allows it to communicate with the phone (referred to as 'Mobile Equipment', ME, or 'Mobile Station', MS, in the related standards) and connect to a mobile network. While the original GSM standard did not make a distinction between the physical smart card and the software required to connect to the mobile network, with the introduction of 3G standards a clear distinction has been made. The physical smart card is referred to as the Universal Integrated Circuit Card (UICC), and different mobile network applications that run on it have been defined: GSM, CSIM, USIM, ISIM, etc. A UICC can host and run more than one network application (hence 'universal'), and thus can be used to connect to different networks. While network application functionality depends on the specific mobile network, their core features are quite similar: store network parameters securely and identify the subscriber to the network, as well as (optionally) authenticate the user and store user data.

SIM card applications

Let's take GSM/3G as an example and briefly review how a network application works. For GSM the main network parameters are the network identity (International Mobile Subscriber Identity, IMSI; tied to the SIM), the phone number (MSISDN, used for routing calls and changeable) and a shared network authentication key Ki. To connect to the network the MS needs to authenticate itself and negotiate a session key. Both authentication and session key derivation make use of Ki, which is also known to the network and looked up by IMSI. The MS sends a connection request that includes its IMSI, which the network uses to find the corresponding Ki. The network then uses the Ki to generate a challenge (RAND), the expected challenge response (SRES) and a session key Kc, and sends RAND to the MS. Here's where the GSM application running on the SIM card comes into play: the MS passes the RAND to the SIM card, which in turn generates its own SRES and Kc. The SRES is sent to the network and, if it matches the expected value, encrypted communication is established using the session key Kc. As you can see, the security of this protocol hinges solely on the secrecy of the Ki. Since all operations involving the Ki are implemented inside the SIM and it never comes in direct contact with either the MS or the network, the scheme is kept reasonably secure. Of course, security depends on the encryption algorithms used as well, and major weaknesses that allow intercepted GSM calls to be decrypted using off-the-shelf hardware were found in the original versions of the A3/A5 algorithms (which were initially secret). Jumping back to Android for a moment, all of this is implemented by the baseband software (more on this later) and network authentication is never directly visible to the main OS.

We've shown that SIM cards need to run applications, let's now say a few words about how those applications are implemented and installed. Initial smart cards were based on a file system model, where files (elementary files, EF) and directories (dedicated files, DF) were named with a two-byte identifier. Thus developing 'an application' consisted mostly of selecting an ID for the DF that hosts its files (called ADF), and specifying the formats and names of EFs that store data. For example, the GSM application is under the '7F20' ADF, and the USIM ADF hosts the EF_imsi, EF_keys, EF_sms, etc. files. Practically all SIMs used today are based on Java Card technology and implement GlobalPlatform card specifications. Thus all network applications are implemented as Java Card applets and emulate the legacy file-based structure for backward compatibility. Applets are installed according to GlobalPlatform specifications by authenticating to the Issuer Security Domain (Card Manager) and issuing LOAD and INSTALL commands.

One application management feature specific to SIM cards is support for OTA (Over-The-Air) updates via binary SMS. This functionality is not used by all carriers, but it allows them to remotely install applets on SIM cards they have issued. OTA is implemented by wrapping card commands (APDUs) in SMS T-PDUs, which the ME forwards to the SIM (ETSI TS 102 226). In most SIMs this is actually the only way to load applets on the card, even during initial personalization. That is why most of the common GlobalPlatform-compliant tools cannot be used as is for managing SIMs. One needs to either use a tool that supports SIM OTA, such as the SIMalliance Loader, or implement APDU wrapping/unwrapping, including any necessary encryption and integrity algorithms (ETSI TS 102 225). Incidentally, problems with the implementation of those secured packets on some SIMs that use DES as the encryption and integrity algorithm have been used to crack OTA update keys. The major use of the OTA functionality is to install and maintain SIM Toolkit (STK) applications, which can interact with the handset via standard 'proactive' commands (in reality implemented via polling) and display menus, or even open Web pages and send SMS. While STK applications are almost unheard of in the US and Asia, they are still heavily used in some parts of Europe and Africa for anything from mobile banking to citizen authentication. Android also supports STK with a dedicated STK system app, which is automatically disabled if the SIM card has no STK applets installed.

Accessing the SIM card

As mentioned above, network related functionality is implemented by the baseband software, and what can be done from Android is entirely dependent on what features the baseband exposes. Android supports STK applications, so it does have internal support for communicating with the SIM, but the OS security overview explicitly states that 'low level access to the SIM card is not available to third-party apps'. So how can we use it as an SE then? Some Android builds from major vendors, most notably Samsung, provide an implementation of the SIMalliance Open Mobile API on some handsets, and an open source implementation (for compatible devices) is available from the SEEK for Android project. The Open Mobile API aims to provide a unified interface for accessing SEs on Android, including the SIM. To understand how the Open Mobile API works and the cause of its limitations, let's first review how access to the SIM card is implemented in Android.

On Android devices all mobile network functionality (dialing, sending SMS, etc.) is provided by the baseband processor (also referred to as the 'modem' or 'radio'). Android applications and system services communicate with the baseband only indirectly via the Radio Interface Layer (RIL) daemon (rild). It in turn talks to the actual hardware using a manufacturer-provided RIL HAL library, which wraps the proprietary interface the baseband provides. The SIM card is typically connected only to the baseband processor (sometimes also to the NFC controller via SWP), and thus all communication needs to go through the RIL. While the proprietary RIL implementation can always access the SIM in order to perform network identification and authentication, as well as read/write contacts and access STK applications, support for transparent APDU exchange is not always available. The standard way to provide this feature is to use extended AT commands such as AT+CSIM (Generic SIM access) and AT+CGLA (Generic UICC Logical Channel Access), as defined in 3GPP TS 27.007, but some vendors implement it using proprietary extensions, so support for the necessary AT commands does not automatically provide SIM access.

SEEK for Android provides patches that implement a resource manager service (SmartCardService) that can connect to any supported SE (embedded SE, ASSD or UICC), as well as extensions to the Android telephony framework that allow for transparent APDU exchange with the SIM. As mentioned above, access through the RIL is hardware and proprietary RIL library dependent, so you need both a compatible device and a build that includes the SmartCardService and related framework extensions. Thanks to some work by the u'smile project, UICC access on most variants of the popular Galaxy S2 and S3 handsets is available using a patched CyanogenMod build, so you can make use of the latest SEEK version. Even if you don't own one of those devices, you can use the SEEK emulator extension, which lets you use a standard PC/SC smart card reader to connect a SIM to the Android emulator. Note that just any regular Java Card won't work out of the box, because the emulator will look for the GSM application and mark the card as not usable if it doesn't find one. You can modify it to skip those steps, but a simpler solution is to install a dummy GSM application that always returns the expected responses.

Once you have managed to get a device or the emulator to talk to the SIM, using the Open Mobile API to send commands is quite straightforward:

// connect to the SE service, asynchronous
SEService seService = new SEService(this, this);
// list readers
Reader[] readers = seService.getReaders();
// assume the first one is SIM and open session
Session session = readers[0].openSession();
// open logical (or basic) channel
Channel channel = session.openLogicalChannel(aid);
// send APDU and get response
byte[] rapdu = channel.transmit(cmd);

You will need to request the org.simalliance.openmobileapi.SMARTCARD permission and add the org.simalliance.openmobileapi extension library to your manifest for this to work. See the official wiki for more details.

<manifest ...>

    <uses-permission android:name="org.simalliance.openmobileapi.SMARTCARD" />

    <application ...>
        <uses-library
            android:name="org.simalliance.openmobileapi"
            android:required="true" />
        ...
    </application>
</manifest>

SE-enabled Android applications

Now that we can connect to the SIM card from applications, what can we use it for? Just like regular smart cards, an SE can be used to store data and keys securely and to perform cryptographic operations without keys ever having to leave the card. One of the usual applications of smart cards is to store RSA authentication keys and certificates that are used for anything from desktop logon to VPN or SSL authentication. This is typically implemented by providing some sort of middleware library, usually a standard cryptographic service provider (CSP) module that can plug into the system CSP or be loaded by a compatible application. As the Android security model does not allow system extensions provided by third-party apps, in order to integrate with the system key management service such middleware would need to be implemented as a keymaster module for the system credential store (keystore) and be bundled as a system library. This can be accomplished by building a custom ROM that installs our custom keymaster module, but we can also take advantage of the SE without rebuilding the whole system. The most straightforward way to do this is to implement the security critical part of an app inside the SE and have the app act as a client that only provides a user-facing GUI. One such application provided with the SEEK distribution is an SE-backed one-time password (OTP) Google Authenticator app. Since the critical part of OTP generators is the seed (usually a symmetric cryptographic key), they can easily be cloned once the seed is obtained or extracted. Thus OTP apps that store the seed in a regular file (like the official Google Authenticator app) provide little protection if the device OS is compromised. The SEEK GoogleOtpAuthenticator app both stores the seed and performs OTP generation inside the SE, making it impossible to recover the seed from the app data stored on the device.

Another type of popular application that could benefit from using an SE is a password manager. Password managers typically use a user-supplied passphrase to derive a symmetric key, which is in turn used to encrypt stored passwords. This makes it hard to recover stored passwords without knowing the passphrase, but naturally the security level is totally dependent on its complexity. As usual, because typing a long string with rarely used characters on a mobile device is not a particularly pleasant experience, users tend to pick easier-to-type, low-entropy passphrases. If the key is stored in an SE, the passphrase can be skipped or replaced with a simpler PIN, making the password manager app both more user-friendly and more secure. Let's see how such an SE-backed password manager can be implemented using a Java Card applet and the Open Mobile API.

DIY SIM password manager

Ideally, all key management and encryption logic should be implemented inside the SE, and the client application would only provide input (plain text passwords) and retrieve opaque encrypted data. The SE applet should not only provide encryption, but also guarantee the integrity of the encrypted data, either by using an algorithm that provides authenticated encryption (which most smart cards currently don't support natively) or by calculating a MAC over the encrypted data using HMAC or a similar mechanism. Smart cards typically provide some sort of encryption support, starting with DES/3DES for low-end models and going up to RSA and EC for top-of-the-line ones. Since public key cryptography is typically not needed for mobile network authentication or secure OTA (which is based on symmetric algorithms), SIM cards rarely support RSA or EC. A reasonably secure symmetric cipher and hash algorithm should be enough to implement a simple password manager though, so in theory we should be able to use even a lower-end SIM.
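To illustrate the encrypt-then-MAC scheme described above in plain JCA terms (this is only a sketch of the data layout the client would see; on the card the same structure has to be built from the applet's own primitives, and encKey/macKey are assumed to be pre-generated AES and HMAC keys):

// encrypt-then-MAC sketch: AES-CBC for confidentiality, HMAC-SHA256 for integrity
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
cipher.init(Cipher.ENCRYPT_MODE, encKey);
byte[] iv = cipher.getIV();
byte[] ciphertext = cipher.doFinal(plaintext);

Mac hmac = Mac.getInstance("HmacSHA256");
hmac.init(macKey);
hmac.update(iv);
byte[] mac = hmac.doFinal(ciphertext);
// store iv || ciphertext || mac; verify the MAC before decrypting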

As mentioned in the previous section, all recent SIM cards are based on Java Card technology, and it is possible to develop and load a custom applet, provided one has access to the Card Manager or OTA keys. Those are naturally not available for commercial MNO SIMs, so we would need to use a blank 'programmable' SIM that allows for loading applets without authentication or comes bundled with the required keys. Those are quite hard, but not impossible, to come by, so let's see how such a password manager applet could be implemented. We won't discuss the basics of Java Card programming, but jump straight to the implementation. Refer to the official documentation, or a tutorial, if you need an introduction.

The Java Card API provides a subset of the JCA classes, with an interface optimized towards using pre-allocated, shared byte arrays, which is typical on a memory constrained platform such as a smart card. A basic encryption example would look something like this:

byte[] buff = apdu.getBuffer();
//..
DESKey deskey = (DESKey) KeyBuilder.buildKey(KeyBuilder.TYPE_DES_TRANSIENT_DESELECT,
        KeyBuilder.LENGTH_DES3_2KEY, false);
deskey.setKey(keyBytes, (short) 0);
Cipher cipher = Cipher.getInstance(Cipher.ALG_DES_CBC_PKCS5, false);
cipher.init(deskey, Cipher.MODE_ENCRYPT);
cipher.doFinal(data, (short) 0, (short) data.length, buff, (short) 0);

As you can see, a dedicated key object, automatically cleared when the applet is deselected, is first created and then used to initialize a Cipher instance. Besides the unwieldy number of casts to short (necessary because 'classic' Java Card does not support int, yet int remains the default integer type in the language), the code is very similar to what you would find in a Java SE or Android application. Hashing uses the MessageDigest class and follows a similar routine. Using the system-provided Cipher and MessageDigest classes as building blocks, it is fairly straightforward to implement CBC mode encryption and HMAC for data integrity. However, as it happens, our low-end SIM card does not provide usable implementations of those classes (even though the spec sheet claims it does), so we need to start from scratch. Fortunately, since Java Cards can execute arbitrary programs (as long as they fit in memory), it is also possible to include our own encryption algorithm implementation in the applet. Even better, a Java Card-optimized AES implementation is freely available. This implementation provides only the basic pieces of AES -- key schedule generation and single block encryption -- so some additional work is required to match the Java Cipher class functionality. The bigger downside is that by using an algorithm implemented in software we cannot take advantage of the specialized crypto co-processor most smart cards have. With this implementation our SIM card (8-bit CPU, 6KB RAM) takes about 2 seconds to process a single AES block with a 128-bit key. The performance can be improved slightly by reducing the number of AES rounds to 7 (10 are recommended for 128-bit keys), but that would both lower the security level of the system and result in a non-standard cipher, making testing more difficult. Another disadvantage is that native key objects are usually stored in a secured memory area that is better protected from side channel attacks, but by using our own cipher we are forced to store keys in regular byte arrays. With those caveats, this AES implementation should give us what we need for our demo application. Using the JavaCardAES class as a building block, our AES CBC encryption routine would look something like this:

aesCipher.RoundKeysSchedule(keyBytes, (short) 0, roundKeysBuff);
short padSize = addPadding(cipherBuff, offset, len);
short paddedLen = (short) (len + padSize);
short blocks = (short) (paddedLen / AES_BLOCK_LEN);

for (short i = 0; i < blocks; i++) {
    short cipherOffset = (short) (i * AES_BLOCK_LEN);
    for (short j = 0; j < AES_BLOCK_LEN; j++) {
        cbcV[j] ^= cipherBuff[(short) (cipherOffset + j)];
    }
    aesCipher.AESEncryptBlock(cbcV, OFFSET_ZERO, roundKeysBuff);
    Util.arrayCopyNonAtomic(cbcV, OFFSET_ZERO, cipherBuff,
            cipherOffset, AES_BLOCK_LEN);
}

Not as concise as using the system crypto classes, but it gets the job done. Finally (not shown), the IV and cipher text are copied to the APDU buffer and sent back to the caller. Decryption follows a similar pattern. One thing that is obviously missing is the MAC, but as it turns out a hash algorithm implemented in software is prohibitively slow on our SIM (mostly because it needs to access large tables stored in the slow card EEPROM). While a MAC can also be implemented using the AES primitive, we have omitted it from the sample applet. In practice, tampering with the cipher text of encrypted passwords would only result in incorrect passwords, but it is still a good idea to use a MAC when implementing this on a fully functional Java Card.

Our applet can now perform encryption and decryption, but one critical piece is still missing -- a random number generator. The Java Card API has the RandomData class which is typically used to generate key material and IVs for cryptographic operations, but just as with the Cipher class it is not available on our SIM. Therefore, unfortunately, we need to apply the DIY approach again. To keep things simple and with a (somewhat) reasonable response time, we implement a simple pseudo random number generator (PRNG) based on AES in counter mode. As mentioned above, the largest integer type in classic Java Card is short, so the counter will wrap as soon as it goes over 32767. While this can be overcome fairly easily by using a persistent byte array to simulate a long (or BigInteger if you are more ambitious), the bigger problem is that there is no suitable source of entropy on the smart card that we can use to seed the PRNG. Therefore the PRNG AES key and nonce need to be specified at applet install time and be unique to each SIM. Our simplistic PRNG implementation based on the JavaCardAES class is shown below (buff is the output buffer):

Util.arrayCopyNonAtomic(prngNonce, OFFSET_ZERO, cipherBuff,
        OFFSET_ZERO, (short) prngNonce.length);
Util.setShort(cipherBuff, (short) (AES_BLOCK_LEN - 2), prngCounter);

aesCipher.RoundKeysSchedule(prngKey, (short) 0, roundKeysBuff);
aesCipher.AESEncryptBlock(cipherBuff, OFFSET_ZERO, roundKeysBuff);
prngCounter++;

Util.arrayCopyNonAtomic(cipherBuff, OFFSET_ZERO, buff, offset, len);

The recent Bitcoin app problems traced to a repeatable PRNG in Android, the controversy around the Dual_EC_DRBG algorithm (which is both believed to be weak by design and used by default in popular crypto toolkits), and the low-quality hardware RNGs found in some FIPS-certified smart cards have all highlighted the critical impact a flawed PRNG can have on any system that uses cryptography. That is why a DIY PRNG is definitely not something you would like to use in a production system. Do find a SIM that provides working crypto classes and do use RandomData.ALG_SECURE_RANDOM to initialize the PRNG (that won't help much if the card's hardware RNG is flawed, of course).
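For reference, on a card that does implement RandomData, using the secure RNG is trivial (standard Java Card API; buff is an output buffer as in the snippets above):

// Java Card: use the card's secure RNG instead of a DIY PRNG
RandomData rng = RandomData.getInstance(RandomData.ALG_SECURE_RANDOM);
rng.generateData(buff, (short) 0, (short) 16); // 16 random bytes into buff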

With that we have all the pieces needed to implement the password manager applet, and what is left is to define and expose a public interface. For Java Card this means defining the values of the CLA and INS bytes the applet can process. Besides the obviously required encrypt and decrypt commands, we also provide commands to get the current state, initialize and clear the applet.

static final byte CLA = (byte) 0x80;
static final byte INS_GET_STATUS = (byte) 0x01;
static final byte INS_GEN_RANDOM = (byte) 0x02;
static final byte INS_GEN_KEY = (byte) 0x03;
static final byte INS_ENCRYPT = (byte) 0x04;
static final byte INS_DECRYPT = (byte) 0x05;
static final byte INS_CLEAR = (byte) 0x06;

Once we have a working applet, implementing the Android client is fairly straightforward. We need to connect to the SEService, open a logical channel to our applet (AID: 73 69 6d 70 61 73 73 6d 61 6e 01) and send the appropriate APDUs using the protocol outlined above. For example, sending a string to be encrypted requires the following code (assuming we already have an open Session to the SE). Here 0x9000 is the standard ISO 7816-3/4 success status word (SW):

Channel channel = session.openLogicalChannel(
        fromHex("73 69 6d 70 61 73 73 6d 61 6e 01"));
byte[] data = "password".getBytes("ASCII");
String cmdStr = "80 04 00 00 " + String.format("%02x", data.length)
        + toHex(data) + "00";
byte[] rapdu = channel.transmit(fromHex(cmdStr));
short sw = (short) ((rapdu[rapdu.length - 2] << 8)
        | (0xff & rapdu[rapdu.length - 1]));
if (sw != (short) 0x9000) {
    // handle error
}
byte[] ciphertext = Arrays.copyOf(rapdu, rapdu.length - 2);
String encrypted = Base64.encodeToString(ciphertext, Base64.NO_WRAP);

Besides calling applet operations by sending commands to the SE, the sample Android app also has a simple database to store encrypted passwords paired with a description, and displays the currently managed passwords in a list view. Long-pressing on a password name brings up a contextual action that allows you to decrypt and temporarily display the password, so you can copy it and paste it into the target application. The current implementation does not require a PIN to decrypt passwords, but one can easily be added using Java Card's OwnerPIN class, optionally disabling the applet once a number of incorrect tries is reached. While this app can hardly compete with popular password managers, it has enough functionality to both illustrate the concept of an SE-backed app and be practically useful. Passwords can be added by pressing the '+' action item, and the delete item clears the encryption key and PRNG counter, but not the PRNG seed and nonce. A screenshot of the award-winning UI is shown below. Full source code for both the applet and the Android app is available on Github.


Summary

The AOSP version of Android does not provide a standard API to use the SIM card as a SE, but many vendors do, and as long as the device baseband and RIL support APDU exchange, one can be added by using the SEEK for Android patches. This makes it possible to improve the security of Android apps by using the SIM as a secure element, both to store sensitive data and to implement critical functionality inside it. Commercial SIMs do not allow for installing arbitrary user applications, but applets can be automatically loaded by the carrier using the SIM OTA mechanism, and apps that take advantage of those applets can be distributed through regular channels, such as the Play Store.

Thanks to Michael for developing the Galaxy S2/3 RIL patch and helping with getting it to work on my somewhat exotic S2.

Tuesday, 20 August 2013

Our previous post was not related to Android security, but happened to coincide with the Android 4.3 announcement. Now that the post-release dust has settled, time to give it a proper welcome here as well. Being a minor update, there is nothing ground-breaking, but this 'revenge of the beans' brings some welcome enhancements and new APIs. Enough of those are related to security for some to even call 4.3 a 'security release'. Of course, the big star is SELinux, but credential storage, which has been a somewhat recurring topic on this blog, got a significant facelift too, so we'll look into it first. This post will focus mainly on the newly introduced features and interfaces, so you might want to review previous credential storage posts before continuing.

What's new in 4.3

First and foremost, the system credential store, now officially named 'Android Key Store', has a public API for storing and using app-private keys. This was possible before too, but not officially supported and somewhat clunky on pre-ICS devices. Next, while only the primary (owner) user could use the system key store pre-4.3, it is now multi-user compatible and each user gets their own keys. Finally, there is an API, and even a system settings field, that lets you check whether the credential store is hardware-backed (Nexus 4, Nexus 7) or software-only (Galaxy Nexus). While the core functionality hasn't changed much since the previous release, the implementation strategy has evolved quite a bit, so we will look briefly into that too. That's a lot to cover, so let's get started.

Public API

The API is outlined in the 'Security' section of the 4.3 new API introduction page, and details can be found in the official SDK reference, so we will only review it briefly. Instead of introducing yet another Android-specific API, key store access is exposed via the standard JCE APIs, namely KeyGenerator and KeyStore. Both are backed by a new Android JCE provider, AndroidKeyStoreProvider, and are accessed by passing "AndroidKeyStore" as the type parameter of the respective factory methods (those APIs were actually available in 4.2 as well, but were not public). For a full sample detailing their usage, refer to the BasicAndroidKeyStore project in the Android SDK. To introduce their usage briefly: first you create a KeyPairGeneratorSpec that describes the keys you want to generate (including a self-signed certificate), initialize a KeyPairGenerator with it and then generate the keys by calling generateKeyPair(). The most important parameter is the alias, which you then pass to KeyStore.getEntry() in order to get a handle to the generated keys later. There is currently no way to specify key size or type, and generated keys default to 2048-bit RSA. Here's how all this looks in code:

// generate a key pair
Context ctx = getContext();
String alias = "key1";

Calendar notBefore = Calendar.getInstance();
Calendar notAfter = Calendar.getInstance();
notAfter.add(Calendar.YEAR, 1);

KeyPairGeneratorSpec spec = new KeyPairGeneratorSpec.Builder(ctx)
        .setAlias(alias)
        .setSubject(new X500Principal(String.format("CN=%s, OU=%s", alias,
                ctx.getPackageName())))
        .setSerialNumber(BigInteger.ONE)
        .setStartDate(notBefore.getTime())
        .setEndDate(notAfter.getTime())
        .build();

KeyPairGenerator kpGenerator = KeyPairGenerator.getInstance("RSA", "AndroidKeyStore");
kpGenerator.initialize(spec);
KeyPair kp = kpGenerator.generateKeyPair();

// in another part of the app, access the keys
KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);
KeyStore.PrivateKeyEntry keyEntry = (KeyStore.PrivateKeyEntry) keyStore.getEntry(alias, null);
RSAPublicKey pubKey = (RSAPublicKey) keyEntry.getCertificate().getPublicKey();
RSAPrivateKey privKey = (RSAPrivateKey) keyEntry.getPrivateKey();

If the device has a hardware-backed key store implementation, keys will be generated outside of the Android OS and won't be directly accessible even to the system (or root user). If the implementation is software only, keys will be encrypted with a per-user key-encryption master key. We'll discuss key protection in detail later.
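The check mentioned earlier is exposed via a new static method on KeyChain (API level 18). For example, to find out whether RSA keys are hardware-backed:

// returns true if RSA keys are bound to the device hardware
boolean hardwareBacked = KeyChain.isBoundKeyAlgorithm("RSA");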

Android 4.3 implementation

This hardware-backed design was initially implemented in the original Jelly Bean release (4.1), so what's new here? Credential storage has traditionally (since the Donut days) been implemented as a native keystore daemon that used a local socket as its IPC interface. The daemon has finally been retired and replaced with a 'real' Binder service, which implements the IKeystoreService interface. What's interesting here is that the service is implemented in C++, which is somewhat rare in Android. See the interface definition for details, but compared to the original keymaster-based implementation, IKeystoreService gets four new operations: getmtime(), duplicate(), is_hardware_backed() and clear_uid(). As expected, getmtime() returns the key modification time and duplicate() copies a key blob (used internally for key migration). is_hardware_backed() will query the underlying keymaster implementation and return true when it is hardware-backed. The last new operation, clear_uid(), is a bit more interesting. As we mentioned, the key store now supports multi-user devices and each user gets their own set of keys, stored in /data/misc/keystore/user_N, where N is the Android user ID. Key names (aliases) are mapped to filenames as before, and the owner app UID now reflects the Android user ID as well. When an app that owns key store-managed keys is uninstalled for a user, only the keys created by that user are deleted. If an app is completely removed from the system, its keys are deleted for all users. Since key access is tied to the app UID, this prevents a different app that happens to get the same UID from accessing an uninstalled app's keys. Key store reset, which deletes both key files and the master key, also affects only the current user. Here's how key files for the primary user might look:

1000_CACERT_ca
1000_CACERT_cacert
10248_USRCERT_myKey
10248_USRPKEY_myKey
10293_USRCERT_rsa_key0
10293_USRPKEY_rsa_key0

The actual files are owned by the keystore service (which runs as the keystore Linux user) and it checks the calling UID to decide whether to grant or deny access to a key file, just as before. If the keys are protected by hardware, key files may contain only a reference to the actual key and deleting them may not destroy the underlying keys. Therefore, the del_key() operation is optional and may not be implemented.

The hardware in 'hardware-backed'

To give some perspective to the whole 'hardware-backed' idea, let's briefly discuss how it is implemented on the Nexus 4. As you may know, the Nexus 4 is based on Qualcomm's Snapdragon S4 Pro APQ8064 SoC. Like most recent ARM SoCs it is TrustZone-enabled, and Qualcomm implements its Secure Execution Environment (QSEE) on top of it. Details are, as usual, quite scarce, but trusted applications are separated from the main OS, and the only way to interact with them is through the controlled interface the /dev/qseecom device provides. Android applications that wish to interact with the QSEE load the proprietary libQSEEComAPI.so library and use the functions it provides to send 'commands' to the QSEE. As with most other SEEs, the QSEECom communication API is quite low-level and basically only allows for exchanging binary blobs (typically commands and replies), whose contents depend entirely on the secure app you are communicating with. In the case of the Nexus 4 keymaster, the commands used are: GENERATE_KEYPAIR, IMPORT_KEYPAIR, SIGN_DATA and VERIFY_DATA. The keymaster implementation merely creates command structures, sends them via the QSEECom API and parses the replies. It does not contain any cryptographic code itself.

An interesting detail is that the QSEE keystore trusted app (which may not be a dedicated app, but part of a more general-purpose trusted application) doesn't return simple references to protected keys, but instead uses proprietary encrypted key blobs (not unlike Thales nCipher HSMs). In this model, the only thing that is actually protected by hardware is some form of 'master' key-encryption key (KEK), and user-generated keys are only indirectly protected by being encrypted with the KEK. This allows for a practically unlimited number of protected keys, but has the disadvantage that if the KEK is compromised, all externally stored key blobs are compromised as well (of course, the actual implementation might generate a dedicated KEK for each key blob created, or the key could be fused in hardware; either way no details are available). Qualcomm keymaster key blobs are defined in AOSP code as shown below. This suggests that private exponents are encrypted using AES, most probably in CBC mode, with an added HMAC-SHA256 to check encrypted data integrity. Those blobs might be further encrypted with the Android key store master key when stored on disk.

#define KM_MAGIC_NUM     (0x4B4D4B42)  /* "KMKB" Key Master Key Blob in hex */
#define KM_KEY_SIZE_MAX  (512)         /* 4096 bits */
#define KM_IV_LENGTH     (16)          /* AES128 CBC IV */
#define KM_HMAC_LENGTH   (32)          /* SHA2 will be used for HMAC */

struct qcom_km_key_blob {
    uint32_t magic_num;
    uint32_t version_num;
    uint8_t  modulus[KM_KEY_SIZE_MAX];
    uint32_t modulus_size;
    uint8_t  public_exponent[KM_KEY_SIZE_MAX];
    uint32_t public_exponent_size;
    uint8_t  iv[KM_IV_LENGTH];
    uint8_t  encrypted_private_exponent[KM_KEY_SIZE_MAX];
    uint32_t encrypted_private_exponent_size;
    uint8_t  hmac[KM_HMAC_LENGTH];
};

So, in the case of the Nexus 4, the 'hardware' is simply the ARM SoC. Are other implementations possible? Theoretically, a hardware-backed keymaster implementation does not need to be based on TrustZone. Any dedicated device that can generate and store keys securely can be used, the usual suspects being embedded secure elements (SE) and TPMs. However, there are no mainstream Android devices with dedicated TPMs, and recent flagship devices have begun shipping without embedded SEs, most probably due to carrier pressure (price is hardly a factor, since embedded SEs are usually in the same package as the NFC controller). Of course, all mobile devices have some form of UICC (SIM card), which typically can generate and store keys, so why not use that? Well, Android still doesn't have a standard API to access the UICC, even though 'vendor' firmwares often include one. So while one could theoretically implement a UICC-based keymaster module compatible with the UICCs of your friendly neighbourhood MNO, it is not very likely to happen.

Security level

So how secure are your brand new hardware-backed keys? The answer is, as usual: it depends. If they are stored in a real, dedicated, tamper-resistant hardware module, such as an embedded SE, they are as secure as the SE. And since this technology has been around for over 40 years, and even recent attacks are only effective against SEs using weak encryption algorithms, that means fairly secure. Of course, as we mentioned in the previous section, no current keymaster implementation uses an actual SE, but we can only hope.

What about TrustZone? It is being aggressively marketed as a mobile security 'silver bullet', and streaming media companies have embraced it as an 'end-to-end' DRM solution, but does it really deliver? While the ARM TrustZone architecture might be sound at its core, in the end trusted applications are just software that runs at a slightly lower level than Android. As such, they can be readily reverse engineered, and of course vulnerabilities have been found. And since they run within the Secure World, they can effectively access everything on the device, including other trusted applications. When exploited, this could lead to very effective and hard-to-discover rootkits. To sum up: while TrustZone trusted applications might provide effective protection against Android malware running on the device, given physical access they, as well as the TrustZone kernel, are themselves exploitable. Applied to the Android key store, this means that if there is an exploitable vulnerability in any of the underlying trusted applications the keymaster module depends on, key-encryption keys could be extracted and 'hardware-backed' keys could be compromised.

Advanced usage

As we mentioned in the first section, Android 4.3 offers a well-defined public API to the system key store. It should be sufficient for most use cases, but if needed you can connect to the keystore service directly (as always, not really recommended). Because it is not part of the Android SDK, IKeystoreService doesn't have a wrapper 'Manager' class, so if you want a handle to it, you need to get one directly from the ServiceManager. That too is hidden from SDK apps, but, as usual, you can use reflection. From there, it's just a matter of calling the interface methods you need (see the sample project on Github). Of course, if the calling UID doesn't have the necessary permission, access will be denied, but most operations are available to all apps.

Class smClass = Class.forName("android.os.ServiceManager");
Method getService = smClass.getMethod("getService", String.class);
IBinder binder = (IBinder) getService.invoke(null, "android.security.keystore");
IKeystoreService keystore = IKeystoreService.Stub.asInterface(binder);

By using IKeystoreService directly you can store symmetric keys or other secret data in the system key store using the put() method, something the current java.security.KeyStore implementation does not allow (it can only store PrivateKey entries). Note that such data is only encrypted with the key store master key: even if the system key store is hardware-backed, this data is not protected by hardware in any way.

Accessing hidden services is not the only way to augment the system key store's functionality. Since the sign() operation implements a 'raw' signature (RSASP1 in RFC 3447), key store-managed (including hardware-backed) keys can be used to implement signature algorithms not natively supported by Android. You don't even need the IKeystoreService interface for this, because the operation is available through the standard JCE Cipher interface:

KeyStore ks = KeyStore.getInstance("AndroidKeyStore");
ks.load(null);
KeyStore.PrivateKeyEntry keyEntry = (KeyStore.PrivateKeyEntry) ks.getEntry("key1", null);
RSAPrivateKey privKey = (RSAPrivateKey) keyEntry.getPrivateKey();

Cipher cipher = Cipher.getInstance("RSA/ECB/NoPadding");
cipher.init(Cipher.ENCRYPT_MODE, privKey);
byte[] result = cipher.doFinal(in, 0, in.length);

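To plug this raw RSA primitive into Bouncy Castle's signers and encoders, we need an implementation of its AsymmetricBlockCipher interface that delegates processBlock() to the key store-backed Cipher. The following is a simplified sketch of what such an engine might look like (the sample project's AndroidRsaEngine does essentially this, with proper parameter and error handling):

// Simplified sketch of an AsymmetricBlockCipher backed by an
// AndroidKeyStore key. See the sample project for the real AndroidRsaEngine.
public class AndroidRsaEngine implements AsymmetricBlockCipher {

    private final String keyAlias;
    private final boolean isSigner;

    private boolean forEncryption;
    private Cipher cipher;
    private RSAPublicKey publicKey;

    public AndroidRsaEngine(String keyAlias, boolean isSigner) {
        this.keyAlias = keyAlias;
        this.isSigner = isSigner;
    }

    @Override
    public void init(boolean forEncryption, CipherParameters param) {
        this.forEncryption = forEncryption;
        try {
            KeyStore ks = KeyStore.getInstance("AndroidKeyStore");
            ks.load(null);
            KeyStore.PrivateKeyEntry entry =
                    (KeyStore.PrivateKeyEntry) ks.getEntry(keyAlias, null);
            publicKey = (RSAPublicKey) entry.getCertificate().getPublicKey();

            // Signing and decryption need the private key operation, which the
            // keymaster performs; verification and encryption only need the
            // public key.
            Key key = (isSigner == forEncryption)
                    ? entry.getPrivateKey() : publicKey;
            cipher = Cipher.getInstance("RSA/ECB/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public int getInputBlockSize() {
        int bitSize = publicKey.getModulus().bitLength();
        return forEncryption ? (bitSize - 1) / 8 : (bitSize + 7) / 8;
    }

    @Override
    public int getOutputBlockSize() {
        int bitSize = publicKey.getModulus().bitLength();
        return forEncryption ? (bitSize + 7) / 8 : (bitSize - 1) / 8;
    }

    @Override
    public byte[] processBlock(byte[] in, int inOff, int len)
            throws InvalidCipherTextException {
        try {
            return cipher.doFinal(in, inOff, len);
        } catch (GeneralSecurityException e) {
            throw new InvalidCipherTextException(e.getMessage());
        }
    }
}
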
By wrapping this primitive in Bouncy Castle's AsymmetricBlockCipher interface, as sketched above, you can use any signature algorithm available in the Bouncy Castle lightweight API (we actually use Spongy Castle to stay compatible with Android 2.x without too much hassle). For example, if you want to use a more modern (and provably secure) signature algorithm than Android's default PKCS#1 v1.5 implementation, such as RSA-PSS, you can accomplish it with something like this (see the sample project for the full AndroidRsaEngine):

AndroidRsaEngine rsa = new AndroidRsaEngine("key1", true);

Digest digest = new SHA512Digest();
Digest mgf1digest = new SHA512Digest();
PSSSigner signer = new PSSSigner(rsa, digest, mgf1digest, 512 / 8);
RSAKeyParameters params = new RSAKeyParameters(false,
        pubKey.getModulus(), pubKey.getPublicExponent());

signer.init(true, params);
signer.update(signedData, 0, signedData.length);
byte[] signature = signer.generateSignature();

Likewise, if you need to implement RSA key exchange, you can easily make use of OAEP padding like this:

AndroidRsaEngine rsa = new AndroidRsaEngine("key1", false);

Digest digest = new SHA512Digest();
Digest mgf1digest = new SHA512Digest();
OAEPEncoding oaep = new OAEPEncoding(rsa, digest, mgf1digest, null);

oaep.init(true, null);
byte[] cipherText = oaep.processBlock(plainBytes, 0, plainBytes.length);

The sample application shows how to tie all of those APIs together and features an elegant and fully Holo-compatible user interface:



An added benefit of using hardware-backed keys is that, since they are not generated using Android's default SecureRandom implementation, they should not be affected by the recently announced SecureRandom vulnerability (of course, since the implementation is closed, we can only hope that the trusted apps' RNG actually works...). However, Bouncy Castle's PSS and OAEP implementations do use SecureRandom internally, so you might want to seed the PRNG 'manually' before starting your app to make sure it doesn't start with the same PRNG state as other apps. The keystore daemon/service uses /dev/urandom directly as a source of randomness when generating the master keys used for key file encryption, so those should not be affected. RSA keys generated by the softkeymaster OpenSSL-based software implementation might be affected, because OpenSSL uses RAND_bytes() to generate primes, but are probably OK since the keystore daemon/service runs in a dedicated process and the OpenSSL PRNG automatically seeds itself from /dev/urandom on first access (unfortunately there are no official details about the 'insecure SecureRandom' problem, so we can't be certain).
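
One way to do that, reusing the signer and params objects from the PSS example above, is to seed a SecureRandom instance explicitly and hand it to Bouncy Castle via ParametersWithRandom. This is only an illustration of the 'manual seeding' idea, not a complete fix for the underlying problem:

// Read seed material from /dev/urandom and mix it into a SecureRandom
// instance that we control, then pass that instance to the PSS signer.
byte[] seed = new byte[32];
DataInputStream dis = new DataInputStream(new FileInputStream("/dev/urandom"));
try {
    dis.readFully(seed);
} finally {
    dis.close();
}
SecureRandom random = new SecureRandom();
random.setSeed(seed);

// ParametersWithRandom makes the signer use our explicitly seeded instance
// instead of creating its own SecureRandom internally
signer.init(true, new ParametersWithRandom(params, random));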

Summary

Android 4.3 offers a standard SDK API for generating and accessing app-private RSA keys, which makes it easier for non-system apps to store their keys securely, without implementing key protection themselves. The new Jelly Bean also offers hardware-backed key storage on supported devices, which guarantees that even system or root apps cannot extract the keys. Protection against physical access attacks depends on the implementation, with most (all?) current implementations being TrustZone-based. Low-level RSA operations with key store managed keys are also possible, which enables apps to use cryptographic algorithms not provided by Android's built-in JCE providers.

Tuesday, 23 July 2013

Our previous posts were about code signing in Android, and they turned out to be surprisingly relevant with the announcement of the 'master key' code signing Android vulnerability. While details are yet to be formally released, it has already been patched and dissected, so we'll skip that one and try something different for a change. This post is not directly related to Android security, but it will discuss some Android implementation details, so it might be of some interest to our regular readers. Without further ado, let's get closer to the metal than usual and build a wireless Android device (almost) from scratch.

Board introduction -- BeagleBone Black

For our device we'll use the recently released BeagleBone Black board. So what is a BeagleBone Black (let's call it BBB from now on), then? It's the latest addition to the ranks of ARM-based, credit-card-sized single-board computers. It comes with an AM335x 1GHz ARM Cortex-A8 CPU, 512MB RAM, 2GB of on-board eMMC flash, Ethernet, HDMI and USB ports, plus a whole lot of I/O pins. Best of all, it's open source hardware, and all schematics and design documents are freely available. It's hard to beat the price of $45, and it looks much, much better than the jagged Raspberry Pi. It comes with Angstrom Linux pre-installed, but can run pretty much any Linux flavour and, of course, Android. It is being used for anything from blinking LEDs to tracking satellites. You can hook it up to circuits you've built or quickly extend it using one of the many 'cape' plug-in boards available. We'll use a couple of those for our project, so 'building' here refers mostly to creating an Android build compatible with our hardware. We'll detail the hardware later, but let's first outline some simple requirements for our mobile Android device:
  1. touch screen input
  2. wireless connectivity via WiFi
  3. battery powered
Here's what we start with:

Building a kernel for Android

Android support for AM335x-based devices is provided by the rowboat project. It integrates the required kernel and OS patches and provides build configurations for each of the supported devices, including the BBB. The latest version is based on Android 4.2.2, and if you want to get started quickly, you can download a binary build from TI's Sitara Android development kit page. All you need to do is flash it to an SD card, connect the BBB to an HDMI display and power it on. You will instantly get a fully working, hardware-accelerated Jelly Bean 4.2 device you can control using a standard USB keyboard and mouse. If that is all you need, you might as well stop reading here. Our first requirement, however, is a working touch screen, not an HDMI monitor, so we have some work to do. As it happens, a number of LCD capes are already available for the BBB (from circuitco and others), so those are our first choice. We opted for the LCD4 4.3" cape, which offers almost reasonable resolution and is small enough to be directly attached to the BBB. Unfortunately it doesn't work with the rowboat build from TI. To understand why, let's take a step back and discuss how the BBB supports extension hardware, including capes.

Linux Device Tree and cape support

If you look at the expansion header pinout table in the BBB reference manual, you will notice that each pin can serve multiple purposes, depending on configuration. This is called 'pinmuxing' and is the method modern SoCs use to multiplex multiple peripheral functions to a limited set of physical pins. The AM335x CPU the BBB uses is no exception: it has pins with up to 8 possible peripheral functions. So, in order for a cape to work, the SoC needs to be configured to use the correct inputs/outputs for that cape. The situation becomes more complicated when you have multiple capes (up to 4 at a time). BBB capes solve this by including an EEPROM that stores enough data to identify the cape, its revision and serial number. At boot time, the kernel identifies the capes by reading their EEPROMs, computes the optimal configuration (or outputs an error if the connected capes are not compatible) and sets the expansion header pinmux accordingly.

Initially, this was implemented in a 'board file' in the Linux kernel, and adding a new cape required modifying the kernel and making sure all possible cape configurations were supported. Needless to say, this is not an easy task, and getting such changes merged into the Linux mainline is even harder. Since everyone is building some sort of ARM device nowadays, the number of board files and variations thereof reached critical mass, and the Linux kernel maintainers decided to decouple board-specific behaviour from the kernel. The mechanism for doing this is called Device Tree (DT), and its goal is to make life easier both for device developers (no need to hack the kernel for each device) and for kernel maintainers (no need to merge board-specific patches every other day). A DT is a data structure describing hardware which is passed to the kernel at boot time; using it, a generic board driver can configure itself dynamically.

The BBB ships with a 3.8 kernel and takes full advantage of the new DT architecture. Cape support is naturally implemented using DT source (DTS) files and even goes a step further than mainline Linux by introducing the Cape Manager, an in-kernel mechanism for dynamically loading Device Tree fragments from userspace. This allows for runtime (vs. boot time) loading of capes via sysfs, resource conflict resolution (where possible), manual control over already loaded capes and more.

Going back to Android, the rowboat Android port is using the 3.2 kernel and relies on manual porting of extension peripheral configuration to the kernel board file. As it happens, support for our LCD4 cape is not there yet. We could try to patch the kernel based on the 3.8 DTS files, or take the plunge and attempt to run Android using 3.8. Since all BBB active development is going on in the 3.8 branch, using the newer version is the better (if more involved) choice.

Using the 3.8 kernel

As we know, Android adds a bunch of 'Androidisms' to the Linux kernel, most notably wakelocks, alarm timers, ashmem, binder, the low memory killer and 'paranoid' network security. Thus, until recently you could not use a vanilla Linux kernel as-is to run Android: a number of Android-specific patches needed to be applied first. Fortunately, thanks to the Android Mainlining Project, most of these features are already merged (in one form or another) in the 3.8 kernel and are available as staging drivers. What this means is that we can take a 3.8 kernel that works well on the BBB and use it to run Android. Unfortunately, the BBB can't quite use a vanilla 3.8 kernel yet and requires quite a few patches (including the Cape Manager). However, building a 3.8 kernel with all BBB patches applied is not too hard to do, thanks to instructions and build scripts by Robert Nelson. Even better, Andrew Henderson has successfully used it with Android and has detailed the procedure. Following Andrew's build instructions, we can create an Android build that has a good chance of supporting our touch screen. As Andrew's article mentions, hardware acceleration (support for the BBB's PowerVR SGX 530 GPU) is not yet available for the 3.8 kernel, so we need to disable it in our build. One thing that is missing from Andrew's instructions is that you also need to disable building and installing the SGX drivers, otherwise Android will try to use them at boot and SurfaceFlinger will fail to start due to a driver/kernel module incompatibility. You can do this by commenting out the dependency on sgx in rowboat's top-level Makefile like this:

@@ -11,7 +13,7 @@
CLEAN_RULE = sgx_clean wl12xx_compat_clean kernel_clean clean
else
ifeq ($(TARGET_PRODUCT), beagleboneblack)
-rowboat: sgx
+#rowboat: sgx
CLEAN_RULE = sgx_clean kernel_clean clean
else
ifeq ($(TARGET_PRODUCT), beaglebone)


Note that the kernel alone is not enough: the boot loader (Das U-Boot) needs to be able to load the (flattened) device tree blob, so we need to build a recent version of that as well. Android seems to run OK with this configuration, but a few things are still missing. The first one you will probably notice is ADB support.

ADB support

ADB (Android Debug Bridge) is one of the best things to come out of the Android project, and if you have been doing Android development in any form for a while, you probably take it for granted. It is a fairly complex piece of software though: it provides support for debugging, file transfer, port forwarding and more, and requires kernel support in addition to the Android daemon and client application. In kernel terms this is known as the 'Android USB Gadget Driver', and it is not quite available in the 3.8 kernel, even though there have been multiple attempts at merging it. We could merge the required bits from Google's 3.8 kernel tree, but since we are trying to stay as close as possible to the original BBB 3.8 kernel, we'll use a different approach. While attempts to get ADB into the mainline continue, Function Filesystem (FunctionFS) driver support has been added to Android's ADB, and we can use that instead of the 'native' Android gadget. To use ADB with FunctionFS:
  1. Configure FunctionFS support in the kernel (CONFIG_USB_FUNCTIONFS=y):
    • Device Drivers -> USB Support -> 
      USB Gadget Support -> USB Gadget Driver -> Function Filesystem
  2. Modify the boot parameters in uEnv.txt to set the vendor and product IDs, as well as the device serial number
    • g_ffs.idVendor=0x18d1 g_ffs.idProduct=0x4e26 g_ffs.iSerialNumber=<serial>
  3. Setup the FunctionFS directory and mount it in your init.am335xevm.usb.rc file:
    • on fs
      mkdir /dev/usb-ffs 0770 shell shell
      mkdir /dev/usb-ffs/adb 0770 shell shell
      mount functionfs adb /dev/usb-ffs/adb uid=2000,gid=2000
  4. Delete all lines referencing /sys/class/android_usb/android0/*. (Those nodes are created by the native Android gadget driver and are not available when using FunctionFS.)
Once this is done, you can reboot, and you should see your device listed by adb devices soon after the kernel has loaded. Now you can debug the OS using Eclipse and push and install files directly using ADB. That said, this won't help you at all if the device doesn't boot due to some kernel misconfiguration, so you should definitely get an FTDI cable (the BBB does not have an on-board FTDI chip) to be able to see kernel messages during boot and get an 'emergency' shell when necessary.

cgroups patch

If you are running adb logcat in a console and experimenting with the device, you will notice a lot of 'Failed setting process group' warnings like this one:

W/ActivityManager(  349): Failed setting process group of 4911 to 0
W/SchedPolicy( 349): add_tid_to_cgroup failed to write '4911' (Permission denied);

Android's ActivityManager uses Linux control groups (cgroups) to run processes with different priorities (background, foreground, audio, system) by adding them to scheduling groups. In the mainline kernel only processes running as root (EUID=0) are allowed to do this, but Android changes this behaviour (naturally, with a patch) to only require the CAP_SYS_NICE capability, which allows the ActivityManager (running as system in the system_server process) to add app processes to scheduling groups. To get rid of this warning, you can disable scheduling groups by commenting out the code that sets up /dev/cpuctl/tasks in init.rc, or you can merge the modified functionality from Google's experimental 3.8 branch (which is what we've been trying to avoid all along...).
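
Mechanically, what add_tid_to_cgroup() does is append the thread ID to the tasks file of the target cgroup, roughly as in the illustrative Java equivalent below; the exact cgroup paths vary between Android versions, so treat them as an example:

// Rough Java equivalent of add_tid_to_cgroup(): append a thread/process ID
// to a cgroup's tasks file. On an unpatched mainline kernel this write fails
// with EACCES (Permission denied) unless the caller is root.
void addToCgroup(int tid, String tasksFile) throws IOException {
    FileWriter writer = new FileWriter(tasksFile, true); // open for append
    try {
        writer.write(Integer.toString(tid));
    } finally {
        writer.close();
    }
}

// e.g. addToCgroup(4911, "/dev/cpuctl/tasks");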

Android hardware support

Touchscreen

We now have a functional Android development device running mostly without warnings, so it's time to look closer at requirement #1. As we mentioned, once we disable hardware acceleration, the LCD4 works fine with our 3.8 kernel, but a few things are still missing. The LCD4 comes with 5 directional GPIO buttons which are somewhat useful because scrolling on a resistive touchscreen takes some getting used to, but that is not the only thing they can be used for. We can remap them as Android system buttons (Back, Home, etc) by providing a key layout (.kl) file like this one:

key 105   BACK      WAKE
key 106   HOME      WAKE
key 103   MENU      WAKE
key 108   SEARCH    WAKE
key 28    POWER     WAKE

The GPIO keypad on the LCD4 identifies itself as 'gpio.12' (you can check this using the getevent command), so we need to name the layout file 'gpio_keys_12.kl'. To achieve this we modify device.mk in the BBB device directory (device/ti/beagleboneblack):

...
# KeyPads
PRODUCT_COPY_FILES += \
$(LOCAL_PATH)/gpio-keys.kl:system/usr/keylayout/gpio_keys_12.kl \
...

Now that we are using hardware buttons, we might want to squeeze some more screen real estate from the LCD4 by not showing the system navigation bar. This is done by setting config_showNavigationBar to false in the config.xml framework overlay file for our board:

<bool name="config_showNavigationBar">false</bool>

While playing with the screen, we notice that it's a bit dark. Increasing the brightness via the display settings, however, does not seem to work. A friendly error message in logcat tells us that Android can't open the /sys/class/backlight/pwm-backlight/brightness file. Screen brightness and LEDs are controlled by the lights module on Android, so that's where we look first. There is a hardware-specific one under the beagleboneblack device directory, but it only supports the LCD3 and LCD7 displays. Adding support for the LCD4 is simply a matter of finding the file that controls brightness under /sys. For the LCD4 it's called /sys/class/backlight/backlight.10/brightness and works exactly like on the other LCDs: you get or set the brightness by reading or writing the backlight intensity level (0-100) as a string. We modify light.c (full source on Github) to first try the LCD4 device and voila -- setting the brightness via the Android UI now works... not. It turns out the brightness file is owned by root and the Settings app doesn't have permission to write to it. We can change this permission in the board's init.am335xevm.rc file:

# PWM-Backlight for display brightness on LCD4 Cape
chmod 0666 /sys/class/backlight/backlight.10/brightness

This finally settles it, so we can cross requirement #1 off our list and try to tackle #2 -- wireless support.

WiFi adapter

The BBB has an onboard Ethernet port which is supported out of the box by the rowboat build. If we want to make our new Android device mobile though, we need to add either a WiFi adapter or a 3G modem. 3G support is possible, but somewhat more involved, so we will try to enable WiFi first. There are a number of capes that provide WiFi and Bluetooth for the original BeagleBone, but they are not compatible with the BBB, so we will try using a regular USB WiFi dongle instead. As long as it has a Linux driver, it should be quite easy to wire it into Android by following the TI porting guide, right?

We'll use a WiFi dongle from LM Technologies based on the Realtek RTL8188CUS chipset, which is supported by the Linux rtl8192cu driver. In addition to the kernel driver, this wireless adapter requires a binary firmware blob, so we need to make sure it's loaded along with the kernel modules. But before getting knee-deep into makefiles, let's briefly review the Android WiFi architecture. Like most hardware support in Android, it consists of a kernel layer (WiFi adapter driver modules), a native daemon (wpa_supplicant), a HAL (wifi.c in libhardware_legacy, which communicates with wpa_supplicant via its control socket), a framework service and its public interface (WifiService and WifiManager), and the application/UI layer (the 'Wi-Fi' screen in the Settings app, as well as SystemUI, which is responsible for showing the WiFi status bar indicator). That may sound fairly straightforward, but the WifiService implements some pretty complex state transitions in order to manage the underlying native WiFi support. Why is all the complexity needed? Android doesn't load kernel modules automatically, so the WifiStateMachine has to load the kernel modules, find and load any necessary firmware, start the wpa_supplicant daemon, scan for and connect to an AP, obtain an IP address via DHCP, check for and handle captive portals, and finally, if you are lucky, set up the connection and send out a broadcast to notify the rest of the system of the new network configuration. The wpa_supplicant daemon alone can go through 13 different states, so things can get quite involved when those are combined.
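
The very top of that stack is what regular apps (and our quick tests) see: the WifiManager API. Something like the snippet below, run from a test activity with the ACCESS_WIFI_STATE and CHANGE_WIFI_STATE permissions, exercises the whole chain from the framework down to the rtl8192cu modules:

// Enable WiFi and dump scan results -- a quick end-to-end test of the stack.
final WifiManager wm = (WifiManager) getSystemService(Context.WIFI_SERVICE);
wm.setWifiEnabled(true);   // kicks off WifiStateMachine: driver, supplicant, DHCP
wm.startScan();

registerReceiver(new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        for (ScanResult result : wm.getScanResults()) {
            Log.d("WifiTest", result.SSID + " " + result.level + " dBm");
        }
    }
}, new IntentFilter(WifiManager.SCAN_RESULTS_AVAILABLE_ACTION));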

Going step-by-step through the porting guide, we first enable support for our WiFi adapter in the kernel. That results in 6 modules that need to be loaded in order, plus the firmware blob. The HAL (wifi.c) can only load a single module though, so we pre-load all modules in the board's init.am335xevm.rc and set wlan.driver.status to ok in order to prevent the WifiService from trying (and failing) to load the kernel module itself. We then define the wpa_supplicant and dhcpcd services in the init file. Last, but not least, we need to set the wifi.interface property to wlan0, otherwise Android will silently try to use a test device and fail to start wpa_supplicant. Both properties are set as PRODUCT_PROPERTY_OVERRIDES in device/ti/beagleboneblack/device.mk (see the device directory on Github). Here's what the relevant part of init.am335xevm.rc looks like:

on post-fs-data
    # wifi
    mkdir /data/misc/wifi/sockets 0770 wifi wifi
    insmod /system/lib/modules/rfkill.ko
    insmod /system/lib/modules/cfg80211.ko
    insmod /system/lib/modules/mac80211.ko
    insmod /system/lib/modules/rtlwifi.ko
    insmod /system/lib/modules/rtl8192c-common.ko
    insmod /system/lib/modules/rtl8192cu.ko

service wpa_supplicant /system/bin/wpa_supplicant \
    -iwlan0 -Dnl80211 -c/data/misc/wifi/wpa_supplicant.conf \
    -e/data/misc/wifi/entropy.bin
    class main
    socket wpa_wlan0 dgram 660 wifi wifi
    disabled
    oneshot

service dhcpcd_wlan0 /system/bin/dhcpcd -ABKL
    class main
    disabled
    oneshot

service iprenew_wlan0 /system/bin/dhcpcd -n
    class main
    disabled
    oneshot


In order to build the wpa_supplicant daemon, we then set BOARD_WPA_SUPPLICANT_DRIVER and WPA_SUPPLICANT_VERSION in device/ti/beagleboneblack/BoardConfig.mk. Note that we are using the generic wpa_supplicant, not the TI-patched one, and the WEXT driver instead of the NL80211 one (which requires a proprietary library to be linked in). Since we are preloading the driver kernel modules, we don't need to define WIFI_DRIVER_MODULE_PATH and WIFI_DRIVER_MODULE_NAME.

BOARD_WPA_SUPPLICANT_DRIVER := WEXT
WPA_SUPPLICANT_VERSION      := VER_0_8_X
BOARD_WLAN_DEVICE           := wlan0

To make the framework aware of our new WiFi device, we change networkAttributes and radioAttributes in the config.xml overlay file. Getting this wrong will lead to Android's ConnectivityManager completely ignoring WiFi, even if you manage to connect, and will result in the not too helpful 'No network connection' message (a quick way to verify the result is shown after the XML below). "1" here corresponds to the ConnectivityManager.TYPE_WIFI connection type (the built-in Ethernet connection is "9", TYPE_ETHERNET).

<string-array name="networkAttributes" translatable="false">
...
<item>"wifi,1,1,1,-1,true"</item>
...
</string-array>
<string-array name="radioAttributes" translatable="false">
<item>"1,1"</item>
...
</string-array>
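
A quick way to verify that the framework has actually picked up the new network type is to query ConnectivityManager from a small test app (with the ACCESS_NETWORK_STATE permission); if the overlay is wrong, TYPE_WIFI will show up as missing or disconnected even though wpa_supplicant is happily associated:

// Check whether ConnectivityManager knows about our WiFi network.
ConnectivityManager cm =
        (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo wifiInfo = cm.getNetworkInfo(ConnectivityManager.TYPE_WIFI);
if (wifiInfo == null) {
    Log.d("WifiTest", "TYPE_WIFI not configured -- check networkAttributes");
} else {
    Log.d("WifiTest", "WiFi state: " + wifiInfo.getDetailedState());
}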

Finally, to advertise the new WiFi capability to apps, we copy android.hardware.wifi.xml to /etc/permissions/ by adding it to device.mk. This also takes care of enabling the Wi-Fi screen in the Settings app:

PRODUCT_COPY_FILES := \
...
frameworks/native/data/etc/android.hardware.wifi.xml:system/etc/permissions/android.hardware.wifi.xml \
...

After we've rebuilt rowboat and updated the root file system, you should be able to turn on WiFi and connect to an AP. Make sure you are using an AC power supply to power the BBB, because the WiFi adapter can draw quite a bit of current and you may not get enough via the USB cable. If the board is not getting enough power, you might experience failures to scan, dropped connections and other weird symptoms even if your configuration is otherwise correct. If WiFi support still doesn't work, check the following:
  • that the kernel module(s) and firmware (if any) are loaded (dmesg, lsmod)
  • logcat output for relevant-looking error messages
  • that the wpa_supplicant service is defined properly in init.*.rc and the daemon is started
  • that /data/misc/wifi and wpa_supplicant.conf are available and have the right owner and permissions (wifi:wifi and 0660)
  • that the wifi.interface and wlan.driver.status properties are set correctly
  • use your debugger if all else fails
That was easy, right? Now that we have a working wireless connection, it's time to think about requirement #3: powering the device.

Battery power

The BBB can be powered in three ways: via the miniUSB port, via the 5V AC adapter jack, or by using the power rail (VDD_5V) on the board directly. We can use any USB battery pack that provides enough current (~1A) and has enough capacity to keep the device going by simply connecting it to the miniUSB port. Those can be rather bulky, and you will need an extra cable, so let's look for other options. As can be expected, there is a cape for that. The aptly named Battery Cape plugs into the BBB's expansion connectors and provides power directly to the power rail. We can plug the LCD4 on top of it and get an integrated (if a bit bulky) battery-powered touchscreen device. The Battery Cape holds 4 AA batteries connected as two sets in parallel. It is not simply a glorified battery holder though -- it has a boost converter that can provide a stable 1A at 5V even if the battery voltage fluctuates (1.8-5.5V). It does support monitoring battery voltage via the AIN4 input, but does not have a 'fuel gauge' chip, so we can't display the battery level in Android without adding additional circuitry. This means our mobile device cannot display the battery level (yet) and unfortunately won't be able to shut itself down when the battery level becomes critically low. That is something that definitely needs work, but for now we make the device believe it's always at 100% power by setting the hw.nobattery property to true. The alternative is to have it display the 'low battery' red warning icon all the time, so this approach is somewhat preferable. Four 1900 mAh batteries installed in the Battery Cape should provide enough power to run the device for a few hours, even when using WiFi, so we can (tentatively) mark requirement #3 as fulfilled.

Flashing the device

If you have been following Andrew Henderson's build guide linked above, you have been 'installing' Android on an SD card and booting the BBB from it. This works fine and makes it easy to fix things when Android won't load: simply mount the SD card on your PC and edit or copy the necessary files. However, most consumer-grade SD cards don't offer the best performance and can be quite unreliable. As we mentioned at the beginning of the post, the BBB comes with 2GB of built-in eMMC, which is enough to install Android and have some space left for a data partition. On most Android devices flashing is performed either by booting into the recovery system or by using the fastboot tool over USB. The rowboat build does not have a recovery image, and while fastboot is supported by TI's fork of U-Boot, the version we are using to load the DT blob does not support fastboot yet. That leaves booting another OS in lieu of a recovery and flashing the eMMC from there, either manually or by using an automated flasher image. The flasher image simply runs a script at startup, so let's see how the process works by doing it manually first. The latest BBB Angstrom bootable image (not the flasher one) is a good choice for our 'recovery' OS, because it is known to work on the BBB and has all the needed tools (fdisk, mkfs.ext4, etc.). After you dd it to an SD card, mount the card on your PC and copy the Android boot files and rootfs archive to an android/ directory. You can then boot from the SD card, get a root shell on Angstrom and install Android to the eMMC from there.

Android devices typically have boot, system and userdata partitions, as well as a recovery one and optionally others. The boot partition contains the kernel and a ramdisk which gets mounted at the root of the device filesystem. system contains the actual OS files and gets mounted read-only at /system, while userdata is mounted read-write at /data and stores system and app data, as well as user-installed apps. The partition layout used by the BBB is slightly different. The boot ROM will look for the first stage bootloader (SPL, named MLO in U-Boot) on the first FAT partition of the eMMC. It in turn will load the second stage bootloader (u-boot.img), which will then search for an OS image according to its configuration. On embedded devices U-Boot configuration is typically stored as a set of variables in NAND, replaced by the uEnv.txt file on devices without NAND such as the BBB. Thus we need a FAT boot partition to host the SPL, u-boot.img, uEnv.txt, the kernel image and the DT blob. system and userdata will be formatted as EXT4 and will work as on typical Android devices.

The default Angstrom installation creates only two partitions -- a DOS one for booting, and a Linux one that hosts Angstrom Linux. To prepare the eMMC for Android, you need to delete the Linux partition and create two new Linux partitions in its place: one to hold the Android system files and one for user data. If you don't plan to install too many apps, you can simply make them equal in size. When booting from the SD card, the eMMC device will be /dev/block/mmcblk1, with the first partition being /dev/block/mmcblk1p1, the second /dev/block/mmcblk1p2, and so on. Once the three partitions are in place (created with fdisk), we format them with their respective filesystems:

# mkfs.vfat -F 32 -n boot /dev/block/mmcblk1p1
# mkfs.ext4 -L rootfs /dev/block/mmcblk1p2
# mkfs.ext4 -L usrdata /dev/block/mmcblk1p3

Next, we mount boot and copy the boot-related files, then mount rootfs and untar the rootfs.tar.bz2 archive. usrdata can be left empty; it will be populated on first boot.

# mkdir -p /mnt/1/
# mkdir -p /mnt/2/
# mount -t vfat /dev/block/mmcblk1p1 /mnt/1
# mount -t ext4 /dev/block/mmcblk1p2 /mnt/2
# cp MLO u-boot.img zImage uEnv.txt am335x-boneblack.dtb /mnt/1/
# tar jxvf rootfs.tar.bz2 -C /mnt/2/
# umount /mnt/1
# umount /mnt/2

With this, Android is installed on the eMMC, so you can shut down the 'recovery' OS, remove the SD card and boot from the eMMC. Note that the U-Boot version used here has been patched to probe whether an SD card is available and will automatically boot from it (without you needing to hold the BBB's user boot button), so if you don't remove the 'recovery' SD card, it will boot again.

We now have a working, touch screen Android device with wireless connectivity. Here's how it looks in action:

Our device is unlikely to win any design awards or replace your Nexus 7, but it could serve as the basis of dedicated Android devices, such as a wireless POS terminal or a SIP phone, and could be extended even further by adding more capes or custom hardware as needed.

Summary

The BBB is fully capable of running Android, and by adding off-the-shelf peripherals such as a touch screen and a WiFi adapter you can easily turn it into a 'tablet' of sorts. While the required software is mostly available in the rowboat project, to get the best hardware support you need to use the BBB's native 3.8 kernel and configure Android to use it. Making hardware fully available to the Android OS is mostly a matter of configuring the relevant HAL bits properly, but that is not always straightforward, even with board vendor-provided documentation. The reason for this is that Android subsystems are not particularly cohesive -- you need to modify multiple, sometimes seemingly unrelated, files at different locations to get a single subsystem working. This is, of course, not specific to Android and is the price to pay for building a system by integrating originally unrelated OSS projects. On the positive side, most components can be replaced and the required changes can usually be confined to the (sometimes loosely defined) Hardware Abstraction Layer (HAL).
