|Highlights of DEFCON 22||August 13, 2014|
The twenty-second DEFCON took over Las Vegas last week, and brought many interesting and notable speakers. I took a few notes from the talks that stood out to me, and I'm passing them along here.
Paul Vixie, Internet pioneer and DNS expert. Vixie spoke about his DNSDB project for accumulating global DNS resource records in a passive fashion, and making this information available to researchers and security product vendors. He also spoke about his DNS firewall for shielding users from malicious throwaway domain names.
Phil Zimmermann, creator of PGP and president of Silent Circle. Zimmermann spoke about wiretap overcompliance in the telecommunications industry, trust in cryptographic techniques, and his new endeavors at Silent Circle. Reading about Zimmermann's PGP efforts and the resulting drama (PGP: Pretty Good Privacy, Simson Garfinkel) is what got me interested in cryptography many years ago, so it was great to see a living legend on the stage. I did take issue with a few of his comments, though. When asked about trusting binary executables, Zimmermann mentioned the difficulty of distributing a binary that is verifiably identical to one built from source, due to differences in timestamps, but he failed to discuss recent progress in reproducible build techniques, which are meant to solve exactly that problem. He also painted a somewhat rosy picture of the legislative attitude towards cryptography and privacy: we won the Crypto Wars in the 1990s, and cryptographic freedom can't be rolled back now that everyone relies on it. This does not seem to be the case -- last year, Congress and the administration were pushing a proposal which would effectively outlaw peer-to-peer communication systems that might be problematic to wiretap. (Thankfully, the Snowden revelations made the proposal politically toxic for now, and it has been shelved.)
Kenneth White, security researcher. White spoke about the Open Crypto Audit project which he launched along with cryptographer Matthew Green, and the drama caused by their first audit subject, TrueCrypt, being suddenly discontinued under mysterious circumstances. I've followed the progress of the Open Crypto Audit project and the ongoing news about the TrueCrypt disappearance, so there wasn't much in the talk that was new to me. It was interesting to hear that some of the biggest challenges for Open Crypto Audit were the community aspects of audit fundraising. White reported that they will finish the TrueCrypt audit in spite of the shutdown, and then move on to OpenSSL.
Dan Kaminsky, security researcher. Kaminsky scored a coveted two-hour slot in the Penn and Teller theater, which he fully used to discuss a variety of topics:
- Secure random by default. Kaminsky argued that most vulnerabilities resulting from random number generation are not due to exotic attacks on complex algorithms, but rather gross missteps in the use and generation of randomness. For instance, some software has been observed to use only 32 bits of effective entropy, while other software employs linear feedback shift registers (LFSRs) in spite of their easy cryptanalysis. Kaminsky proposes a new Liburandy library which wraps /dev/urandom when appropriate.
- Storybits. Kaminsky invited Ryan Castellucci onto the stage to demonstrate Storybits 0.1, a new cryptomnemonic scheme to help people remember binary strings such as keys, fingerprints, and secrets. The system encodes the data as adjective-noun-verb tuples to make the data easier to remember, and provides error correction by way of spell-check auto-correct.
- Memory hardening. Convinced that improper memory usage is a major cause of vulnerabilities, Kaminsky outlined several strategies for memory-hardening applications. These include use of a typed heap (as Google does in Chrome), the use of nondeterministic freeing (as Microsoft does in Internet Explorer), and a novel approach called IronHeap where 64-bit virtual memory addresses are simply never freed (although pages may be returned for MMU reuse). He also announced the formation of a team to memory-harden Firefox, to provide added security for the Tor Browser Bundle.
- Distributed Denial of Service (DDoS) mitigation. Kaminsky considers the rise of DDoS attacks using techniques such as datagram amplification to be an existential threat to the Internet. He proposes a new scheme of sending tracer packets within data flows to indicate when source address spoofing may be happening.
- NSA. Kaminsky is concerned that the NSA backlash may lead to a balkanization of the Internet, as various nations opt to develop their own internal systems for core Internet services.
- Apple bug bounties. Finally, Kaminsky is quite happy that Apple is offering bug bounties relating to Safari autoredirection.
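To make the secure-random point above concrete, here's a minimal Python sketch contrasting a textbook LFSR (the taps and seed below are standard examples, not anything Kaminsky showed) with reading the kernel CSPRNG via os.urandom, which is what a wrapper library in the spirit of Liburandy would ultimately do:

```python
import os

def lfsr16(seed, n):
    """Toy 16-bit Fibonacci LFSR (polynomial x^16+x^14+x^13+x^11+1).
    Fast, but fully deterministic: anyone who observes a little output
    can recover the state and predict everything that follows."""
    state = seed & 0xFFFF
    out = []
    for _ in range(n):
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        out.append(state)
    return out

# Same seed, same "random" sequence -- the cryptanalytic weakness in a nutshell.
assert lfsr16(0xACE1, 8) == lfsr16(0xACE1, 8)

# The boring, correct alternative: read the kernel CSPRNG
# (/dev/urandom on Linux) through os.urandom.
key = os.urandom(32)    # 256 bits of entropy, not an effective 32
print(len(key))         # 32
```

The point isn't that LFSRs are useless (they're fine for scrambling and checksums), only that they are not a source of unpredictable bits.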
Kaminsky's slides are available.
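As an aside, the cryptomnemonic idea behind Storybits can be sketched in a few lines. The word lists and bit packing below are invented for illustration; the actual Storybits encoding uses much larger dictionaries and differs in detail:

```python
# Toy 2-bit word lists -- real schemes use lists of 256+ words per slot.
ADJECTIVES = ["angry", "blue", "clever", "dusty"]
NOUNS      = ["badger", "comet", "drum", "ferret"]
VERBS      = ["jumps", "melts", "sings", "spins"]

def encode_6bits(value):
    """Map a 6-bit value to a memorable adjective-noun-verb tuple."""
    return (ADJECTIVES[(value >> 4) & 3],
            NOUNS[(value >> 2) & 3],
            VERBS[value & 3])

def decode_6bits(adj, noun, verb):
    """Recover the 6-bit value from the word tuple."""
    return (ADJECTIVES.index(adj) << 4) | (NOUNS.index(noun) << 2) | VERBS.index(verb)

phrase = encode_6bits(0b011011)
print(phrase)                           # ('blue', 'drum', 'spins')
assert decode_6bits(*phrase) == 0b011011
```

The error-correction aspect falls out naturally: a misremembered "drom" is only one edit away from a dictionary word, so a spellcheck pass can snap it back to "drum" before decoding.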
Ladar Levison, founder of Lavabit. Levison spoke about his proposed Dark Mail Alliance, a new electronic mail system designed to preserve the privacy of users. He began by announcing a new name for the project: DIME, the Dark Internet Mail Environment. I was a bit disappointed in the new name: "Dark" can have a sinister connotation for some people, and privacy-preserving technologies should be marketed to the public with positive names reflecting the true value they provide. He should have renamed the project TIME, the Trustworthy Internet Mail Environment. Levison outlined the basic components of the system, including a server called Magma and a modified Thunderbird client called Volcano. DIME unfortunately does not provide forward secrecy for messages, although Levison pointed out that there is forward secrecy at the TLS 1.2 link level. There was also talk of a pseudo-onion scheme to shield metadata and provide some small measure of anonymity, but it wasn't clear to me how this was implemented.
Adam Caudill, software developer and security researcher. In DEFCON's new Crypto Village, Caudill proposed a new secure electronic mail system called Simple Messaging and Identity Management Protocol (SMIMP). This scheme shares some of the same goals as Levison's DIME, but provides an alternative design intended to be developed in the open among the greater Internet engineering community. The most interesting thing to me was a Hashcash-like proof-of-work requirement for reducing spam.
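The Hashcash idea that caught my attention can be sketched briefly: a sender must find a nonce whose hash has a prescribed number of leading zero bits, which is cheap for a human-scale sender to do once per message but expensive for a spammer to do millions of times. This is only an illustration of the concept; SMIMP's actual parameters and hash construction may differ.

```python
import hashlib
from itertools import count

def mint(message, bits=12):
    """Find a nonce so that SHA-256(message:nonce) has `bits` leading
    zero bits. Costs about 2**bits hash operations on average."""
    target = 1 << (256 - bits)
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(message, nonce, bits=12):
    """Checking a stamp takes a single hash -- the asymmetry that makes
    proof-of-work useful as an anti-spam measure."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

nonce = mint("alice@example.com")
assert verify("alice@example.com", nonce)
```

Raising `bits` by one doubles the average minting cost while leaving verification cost unchanged, so the difficulty can be tuned as hardware improves.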
|Recent Android "Package file is invalid" errors||August 10, 2013|
In the past day or so, I've been noticing these "Package file is invalid" errors on my Android devices while trying to upgrade or install certain packages from the Play Store. A bit of searching revealed that many others are having this problem, and various home remedies abound for trying to fix it, such as clearing the Play Store's app cache. Unfortunately, while these remedies may have worked for past problems that led to this error message being displayed, they are useless when trying to fix the issue people are experiencing this weekend.
I decided to do a bit of digging, and I found that Google's web servers are actually sending corrupted packages to the Play Store app. Therefore, no amount of tweaking your device will fix the problem. (Unless such tweaking happens to result in pulling packages from a different web server that doesn't have corrupted files, I suppose.)
UPDATE 2013-08-12: It appears that this problem is isolated to one or more specific servers on Google Play's content distribution network -- if your closest server has corruption, you'll always see this issue unless you move to a different network and a different server is selected. I see the problem here in Colorado, and a brief Twitter survey shows a high concentration of complaints from the U.S. Midwest and Great Lakes region. Suggestions to use a VPN have some merit -- when I VPN into Dallas, I can successfully update/install these problematic packages, because a non-corrupted server is chosen in that case. (Obviously this isn't a great solution.)
UPDATE 2013-08-13: I heard from a Google Play engineer today. It sounds like they're in the process of rolling out a fix, so our package updates and installs should be back to normal very soon!
I've observed this problem on the following devices:
- Galaxy Nexus (Android 4.2)
- Nexus 10 (Android 4.3)
To investigate the problem, I tried downloading the recently released Twitter 4.1.4 package, and compared the downloaded package file (temporarily stored in /data/data/com.android.providers.downloads/cache/downloadfile.apk) to a known good version.
A hex dump of an uncorrupted Twitter 4.1.4 package looks like this around offset 0x0200000:
01fffc0: 6e69 2067 6fcc 8872 6d65 6b2e 0028 2b42  ni go..rmek..(+B
01fffd0: 6972 2069 6e73 616e 206d c4b1 73c4 b16e  ir insan m..s..n
01fffe0: 2079 6f6b 7361 2062 6972 2062 696c 6769   yoksa bir bilgi
01ffff0: 7361 7961 7220 6dc4 b13f 000c 0c42 6f79  sayar m..?...Boy
0200000: 7574 3a20 252e 3166 6b00 0f11 4b6f 6e75  ut: %.1fk...Konu
0200010: 6d75 2064 65c4 9f69 c59f 7469 7200 0303  mu de..i..tir...
0200020: 5369 6c00 2122 2225 3124 7322 2022 2532  Sil.!""%1$s" "%2
0200030: 2473 2220 6c69 7374 6573 696e 6920 6f6c  $s" listesini ol
A hex dump of the corrupted Twitter apk looks like this around offset 0x0200000:
01fffc0: 6e69 2067 6fcc 8872 6d65 6b2e 0028 2b42  ni go..rmek..(+B
01fffd0: 6972 2069 6e73 616e 206d c4b1 73c4 b16e  ir insan m..s..n
01fffe0: 2079 6f6b 7361 2062 6972 2062 696c 6769   yoksa bir bilgi
01ffff0: 504b 0304 1400 0800 0800 e27c 0543 2d70  PK.........|.C-p
0200000: 8d5b c420 0100 986f 0200 1d00 0400 6173  .[. ...o......as
0200010: 7365 7473 2f66 6f6e 7473 2f52 6f62 6f74  sets/fonts/Robot
0200020: 6f2d 4c69 6768 742e 7474 66fe ca00 00ec  o-Light.ttf.....
0200030: 9d07 7c54 55fa f74f 994c 0a21 bd00 8190  ..|TU..O.L.!....
At 16 bytes before the 2-megabyte mark, the corrupted file begins repeating the contents of the beginning of the file, including the ZIP header. It looks like a common programming error when dealing with buffered I/O streams. I first suspected that the Play Store app or the Android framework on my devices had such an error, but then I used tcpdump to examine the actual HTTP traffic as seen from my router:
GET http://r15---sn-qxo7sn7s.c.android.clients.google.com/market/GetBinary/com.twitter.android/420?...
22:01:25.861259 IP 74.125.x.x.80 > 192.168.x.x.39431: Flags [.], seq 2097056:2098516, ack 527, win 245, length 1460
...
0x0230: 2073 cca7 6966 7265 6e69 2067 6fcc 8872  .s..ifreni.go..r
0x0240: 6d65 6b2e 0028 2b42 6972 2069 6e73 616e  mek..(+Bir.insan
0x0250: 206d c4b1 73c4 b16e 2079 6f6b 7361 2062  .m..s..n.yoksa.b
0x0260: 6972 2062 696c 6769 504b 0304 1400 0800  ir.bilgiPK......
0x0270: 0800 e27c 0543 2d70 8d5b c420 0100 986f  ...|.C-p.[.....o
0x0280: 0200 1d00 0400 6173 7365 7473 2f66 6f6e  ......assets/fon
0x0290: 7473 2f52 6f62 6f74 6f2d 4c69 6768 742e  ts/Roboto-Light.
0x02a0: 7474 66fe ca00 00ec 9d07 7c54 55fa f74f  ttf.......|TU..O
Sure enough, the corruption was present in the stream as sent from Google's web server. I assume that the bug is in Google's web server code, or in some intermediate package processing step at the Play Store. Either way, we'll just have to wait for Google to fix the glitch.
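I have no visibility into Google's server code, so I can only illustrate the general class of bug, not the actual cause. The classic version is a copy loop that writes the whole buffer instead of only the bytes the last read actually filled, so stale bytes from an earlier iteration leak into the output whenever a short read occurs:

```python
import io

def copy_buggy(src, dst, bufsize=64):
    """Buffered copy with a classic bug: the write ignores how many
    bytes the read actually returned."""
    buf = bytearray(bufsize)
    while True:
        n = src.readinto(buf)
        if not n:
            break
        dst.write(buf)              # BUG: should be dst.write(buf[:n])

data = b"A" * 100 + b"B" * 30       # 130 bytes: the final read is short
out = io.BytesIO()
copy_buggy(io.BytesIO(data), out)

assert out.getvalue()[:len(data)] == data   # the prefix is intact...
assert len(out.getvalue()) == 192           # ...but stale bytes follow
```

The symptom matches the shape of what I observed: the output is correct up to a buffer-aligned boundary, then previously seen content reappears.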
|Google Fiber Tourism: Plugging into the glass at the Kansas City Hacker House||December 28, 2012|
While finishing up my holiday travel, I decided to stop in for a couple of days at the Kansas City Hacker House, a place for aspiring technology entrepreneurs to live and work on their projects while connected to the Google Fiber gigabit network. Unlike my previous Google Fiber experience, I had an opportunity to plug my laptop directly into the network via gigabit ethernet and run some more tests.
I first ran a few tests of legacy network usage -- uploading and downloading large files from various services.
|test||file size||time||effective bitrate|
|Google Drive - upload||256MB||400 seconds||5.3687 Mbps|
|Google Drive - download||256MB||289 seconds||7.4307 Mbps|
|Dropbox - upload||256MB||31.7 seconds||67.655 Mbps|
|Dropbox - download||256MB||67.6 seconds||31.779 Mbps|
|Ubuntu 12.10 (mirror.anl.gov)||753.293MB||61.4 seconds||102.86 Mbps|
|Ubuntu 12.10 (bittorrent)||753.293MB||342 seconds||18.477 Mbps (peak 31.932)|
|Linux Mint 12 (bittorrent; 72/325 seeds)||1027.468MB||283 seconds||30.456 Mbps|
It looks like Google Drive wasn't having a good day. Dropbox, on the other hand, really screamed (although not as much as you might expect on a gigabit connection). It was nice to be able to download Ubuntu in 61 seconds from a well-connected server. BitTorrent didn't perform well, though -- I suspect you'd need to be downloading a much larger file from many more seeds before BitTorrent would have time to ramp up its connections and compare favorably.
All tests were performed to and from a local ramdisk, to avoid any hard drive I/O bottlenecks. However, the remote servers are likely using spinning disks that are contending with many other users.
The Speedtest.net tests aren't very useful for Google Fiber, since the servers aren't set up for measuring high-bandwidth connections; you end up measuring the server's capabilities and the throughput of various intermediate networks. Nevertheless, here are a couple of tests:
I tested with several other Speedtest.net servers, and all the results varied too much to be useful.
Google Fiber Speed Test
To provide users with a reliable way of measuring the bandwidth to their home, Google provides a Google Fiber Speed Test for testing the connection from the home to a server on the Google Fiber network. (Google Fiber customers can access the server, but it doesn't appear to be accessible from the outside.)
The primary differences between Google's speed tests and the other speed tests seem to be:
- Google's server is located on the Google Fiber network in Kansas City, a mere 4 hops and 1.337ms of latency away from Google Fiber customers. This means that the Google Fiber Speed Test can more accurately measure the capability of a customer's last-mile link. (This also means it's perhaps less useful as a test for measuring access to resources outside of Kansas City.)
- The server is presumably provisioned well enough to handle tests from gigabit customers.
- Google's test opens a large number of simultaneous connections -- as many as 64 from my tcpdump observations. This may help with issues related to TCP window size, and possibly mitigate the negative effects of TCP congestion control should one of the connections miss a packet.
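The window-size point can be quantified with the bandwidth-delay product: a single TCP connection can have at most one receive window in flight per round trip. The numbers below are illustrative, not measurements from Google's test:

```python
def max_throughput_mbps(window_bytes, rtt_ms, connections=1):
    """Throughput ceiling imposed by the bandwidth-delay product:
    each connection moves at most one window per round trip.
    (Modern stacks scale windows beyond 64 KiB; this models the
    classic un-scaled limit.)"""
    return connections * window_bytes * 8 / (rtt_ms / 1000) / 1e6

# With a common 64 KiB window and a 30 ms round trip, one connection
# tops out far below a gigabit; 64 parallel connections lift the
# aggregate ceiling well past it.
one = max_throughput_mbps(65536, 30)        # ~17.5 Mbps
many = max_throughput_mbps(65536, 30, 64)   # ~1118.5 Mbps
print(round(one, 1), round(many, 1))
```

Parallel connections also dilute the impact of congestion control: a lost packet halves the sending rate of only one of the 64 streams instead of the whole transfer.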
Google Fiber has considerably increased their peering arrangements since my last visit. They seem to have good peering with the following networks that I noticed:
- Level 3 - Chicago
- XO - Chicago
- Facebook (Rackspace ORD1) - Chicago
- Inteliquent - Chicago
- Kansas Research and Education Network (KanREN) - Kansas City
- Level 3 - Dallas
- Level 3 - Denver
- Comcast (Equinix Great Oaks) - San Jose
- Level 3 - San Jose
- Amazon - San Francisco
- Google - various
(Who knew that Facebook even had their own nationwide network? If you see tfbnw.net addresses in your traceroutes, the tfbnw stands for "the facebook network".)
IPv6 seems to be functioning properly, according to various online testers. (I did have some issues reaching my 6to4-connected home network via IPv6, for some reason.)
The file transfer tests -- old-fashioned "move this big file from one hard drive on the network to some other hard drive" -- are probably not the best tests of a next-generation gigabit service such as Google Fiber. Nor are most other "download" applications. (What's the point of being able to download four seasons of Breaking Bad in 3 minutes, when it takes 30 hours to watch?) Ultimately, unlocking the true potential of home gigabit connections will rely on the development of new and interesting applications. I predict a lot of live media, immersive telepresence, and rich collaboration applications will arise from this experiment.
Thanks to Ben Barreth and the residents of the Hacker House for having me over!
|Hanging out on the job: Using Google Hangouts for collaborative telepresence||November 20, 2012|
As a work-from-home software engineer, I'm always looking for ways to improve communication with co-workers and clients to help bridge the distance gap. At the beginning of October, a colleague and I decided to devote the month to an extreme collaboration experiment we called Maker's Month. We had been using Google Hangouts for meetings with great effectiveness, so we asked ourselves: Why not leave a hangout running all day, to provide the illusion of working in the same room? To that end, we decided to take our two offices -- separated spatially by 1,000 miles -- and merge them into one with the miracle of modern telecommunications.
We began by establishing some work parameters: We would have a meeting every morning to discuss the goals of the day, then mute our microphones for most of the next 6 to 7 "core office hours" while the hangout was left running. During the day we could see each other working, ask questions, engage in impromptu integration sessions, and generally pretend like we were working under the same roof. At the end of the day, we would have another meeting to discuss our accomplishments, adjust the project schedule, and set goals for the following day. We would then adjourn the hangout and work independently in "offline" mode.
There were a handful of questions we were hoping to answer during the course of this experiment:
- How much bandwidth would this telepresence cost, in terms of both instantaneous bitrate and total data usage?
- What audio/video gear would give us the best experience, and help avoid the usual trouble areas? (Ad-hoc conferencing setups are notorious for annoying glitches such as remote echo.)
- Would Google even allow us to keep such long-duration hangouts running, or to use such a large number of hangout-hours in a month? (Unlike peer-to-peer protocols such as RTP/WebRTC/etc., hangout media streams are actually switched in the cloud and consume the CPU/bandwidth resources of Google.)
- Do extended telepresence sessions provide real value to software development teams?
While Google Hangouts supports up to nine people in a hangout, our experiment only involved two people. (Our initial plans to bring a third team member into the hangout never materialized.)
This wouldn't be a proper Caffeinated Bitstream post without some graphs and figures, so here are some charts showing the overall bandwidth usage:
The first chart shows the bandwidth usage of a typical two-person hangout session, which uses about 750-1000 kbps in each direction (when the connection settings are configured for "fast connection"). The aberrations in the chart are due to changing hangout parameters (i.e. screen sharing instead of video, or the remote party dropping off.) The second chart shows the bandwidth usage for my house during the month of October. The hangout sessions are likely the bulk of this usage, but it also includes occasional movie streaming, Ubuntu downloads, software updates, and such. I sometimes hear people comment that the bandwidth caps imposed by some internet service providers can't be exceeded by legitimate use of the network, but I can easily imagine many telepresence scenarios that would quite legitimately push users over the limit. Fortunately, our usage is fairly modest, and my provider doesn't impose caps, anyway.
My hangout hardware consists of:
- A desktop computer with a quad-core Core i7 920 2.67 GHz processor and 8GB of RAM, running Ubuntu Linux
- A dedicated LCD monitor
- A Logitech HD Pro Webcam C910
- A Blue Yeti microphone
- A stereo system with good speakers, for audio output.
I've occasionally run Google Hangouts on my mid-2010 MacBook Pro, but the high CPU usage eventually revs up the fan to an annoying degree. The desktop computer doesn't seem to noticeably increase its fan noise, although I do have it tucked away in a corner. I've found that having a dedicated screen for the hangout really helps the telepresence illusion. The Yeti microphone is awesome, but the C910's built-in microphone is also surprisingly great. In fact, my colleague can't tell much of a difference between the two. I've noticed that the use of some other (perhaps sub-standard) microphones seems to thwart the echo cancellation built into Google Hangouts, resulting in echo that makes it almost impossible to carry on a conversation.
In addition to its thirst for bandwidth, Google Hangouts also demands a hefty chunk of processor time (and thus, power usage) on my equipment:
|system||cpu usage||quiescent power||hangout power||hangout power increase|
|4-core Core i7 920 2.67 GHz desktop||62%||75W||80W||5W|
|2-core Core i7 2.66 GHz mid-2010 MacBook Pro||77%||13W||38W||25W|
(Note: CPU usage is measured such that full usage of a single core is 100%. The usage is the sum of various processes related to delivering the hangout experience. On Linux: GoogleTalkPlugin, pulseaudio, chrome, compiz, Xorg. On Mac: GoogleTalkPlugin, Google Chrome Helper, Google Chrome, WindowServer, VDCAssistant. Power was measured with an inline Kill A Watt meter.)
I figure that using my desktop machine for daily hangouts has a marginal electrical cost of around $0.06/month. (Although keeping this desktop running without suspending it is probably costing me around $4.74/month.) Changing the hangout settings to "slow connection" roughly reduces the CPU usage by half.
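Those dollar figures fall out of simple energy arithmetic. In the sketch below, the electricity rate and daily hangout hours are my assumptions back-solved to match the figures above, not values stated elsewhere:

```python
def monthly_cost_usd(watts, hours_per_day, usd_per_kwh=0.0878):
    """Convert a sustained power draw into a monthly energy cost.
    The default rate (~$0.088/kWh) is an assumed local figure."""
    kwh = watts * hours_per_day * 30 / 1000
    return kwh * usd_per_kwh

# ~4.5 hangout-hours/day at the 5 W marginal draw, versus a 75 W
# desktop that never sleeps.
print(round(monthly_cost_usd(5, 4.5), 2))    # 0.06 -- marginal hangout cost
print(round(monthly_cost_usd(75, 24), 2))    # 4.74 -- always-on desktop
```

The lesson: the hangout itself is pennies; the real cost is keeping the machine awake around the clock.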
Why does Google Hangouts use so much CPU and bandwidth? I think it all comes down to the use of H.264 Scalable Video Coding (SVC), a bitrate peeling scheme where the video encoder actually produces multiple compressed video streams at different bitrates. The higher-bitrate streams are encoded relative to information in the lower-bitrate streams, so the total required bitrate is fortunately much less than the sum of otherwise independent streams, but it is higher than a single stream. The "video switch in the cloud" operated by Google (or perhaps Vidyo, the provider of the underlying video technology) can determine the bandwidth capacity of the other parties and peel away the high-bitrate layers if necessary. Unfortunately, not only does SVC somewhat increase the bandwidth requirements, but it also means that the Google Talk Plugin cannot leverage any standard H.264 hardware encoders that may be present on the user's computer. Thus, a software encoder is used and the CPU usage is high. The design decision to use SVC probably pays off when three people or more are using a hangout.
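The bitrate trade-off of layered coding can be illustrated with toy numbers (everything below is invented for the sketch; actual SVC layer overheads vary with content and encoder):

```python
# Three quality tiers a receiver might be served, in kbps, if each
# were encoded as an independent stream.
standalone = [150, 400, 900]
simulcast = sum(standalone)          # sending all three separately: 1450 kbps

# SVC encodes each enhancement layer as a delta on the layer below,
# at some efficiency penalty (20% here -- a made-up figure).
penalty = 1.2
svc = standalone[0]
for lo, hi in zip(standalone, standalone[1:]):
    svc += (hi - lo) * penalty

print(svc)   # more than the 900 kbps top tier, less than 1450 kbps simulcast
assert standalone[-1] < svc < simulcast
```

This is the "fortunately much less than the sum, but higher than a single stream" property: the cloud switch can peel enhancement layers for slow receivers without re-encoding anything.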
One downside to using Google Hangouts for extended telepresence sessions is the periodic "Are you still there?" prompt, which seems to appear roughly every 2.5 hours. If you don't answer in the affirmative, you will be dropped from the hangout after a few minutes. Sometimes when I've stepped out of the office for coffee, I'll miss the prompt and get disconnected. I understand why Google does this, though, and reconnecting to the same hangout is pretty easy. Even with our excessive use of Google Hangouts, we haven't encountered any other limits to the service.
Video conferencing has always offered some obvious communication advantages, and Google Hangouts is no exception. The experience is much better than talking on the phone, as body language can really help convey meaning. In many ways, it does help close the distance gap and simulate being in the same room: team members can show artifacts (such as devices and mobile phone apps) and see at a glance if other team members are present, absent, working hard on a problem, or perhaps available for interruption. We made heavy use of the screen sharing feature, and even took advantage of the shared YouTube viewing on several occasions. We didn't engage in pair programming in this experiment, although remote pair programming is not unheard of. The biggest benefit of telepresence for geographically distributed teams seems to be keeping team members focused and engaged, as being able to see other team members working can be a source of motivation.
For me, the biggest downside to frequent use of Google Hangouts is the "stream litter" problem: Every hangout event appears in your Google+ stream forever, unless you manually delete it. While it's only visible to the hangout participants, it's really annoying to have to sift through a hundred hangout events while I'm looking for an unrelated post in my Google+ stream. Also, it's sometimes awkward when I want to share the screen from my work computer while using a different computer for the hangout. I end up joining the hangout a second time from my work computer, only to have nasty audio feedback ensue until I mute the microphone and speaker.
I think that using Google Hangouts for extended work sessions adds a lot of value, and I'll continue to use it. It would be interesting to try other video conferencing solutions to see how they compare.
For the impatient people who just scrolled down to "Conclusions" right away, here's the tl;dr:
Pros:
- Continuous visual of other team members increases the opportunities for impromptu discussions and helps motivation.
- The "same room" illusion helps close the distance gap associated with telework.
- Good quality audio and video.
- Easily accessible from GMail or Google+.
- Screen sharing.
- Shared YouTube viewing.
Cons:
- Relatively high (but manageable) bandwidth and CPU requirements.
- Google+ stream littered with hangout events.
- 2.5-hour "Are you still there?" prompt.
- When eating doughnuts in front of team members, can't offer some for everyone.
|A quick survey of C++11 feature support||October 6, 2012|
I recently conducted a quick-and-dirty survey of C++11 (formerly known as C++0x) features available on various platforms and compilers that I had lying around. My testing was neither authoritative nor rigorous. (For example, g++ without -std=c++0x actually compiles lambdas without throwing an error, so I marked it as supported even though it does give a stern warning.) I'm posting the results here, mostly for my own future reference.
|Mac OS 10.6 / Xcode 4.2
gcc version 4.2.1
Apple clang version 3.0
gcc version 4.6.3
Ubuntu clang version 3.0-6ubuntu3
|g++||clang++||clang++ -std=c++0x||g++||g++ -std=c++0x||clang++||clang++ -std=c++0x||cl.exe /clr|
|omit space in nested template ">>"||X||X||X||X|
|for range (foreach)||X||X||X||X||X|
|move semantics (std::move)||X||X|
|raw string literals||X||X|
|encoded string literals||X||X|
|new function declaration style||X||X||X||X|
|can autodetect need for std::tr1||X||X||X||X||X||X||X||X|
Other, probably more thorough information about C++11 feature support:
- C++0xCompilerSupport (Apache's C++ Standard Library Wiki)
- Summary of C++11 Feature Availability in gcc and MSVC (Scott Meyers)
- C++0x Core Language Features In VC10: The Table (Microsoft)
My quick-and-dirty test suite is available for download.
UPDATE 2013-05-27: More recent platforms and compilers, below...
|Mac OS 10.8 / Xcode 4.6.2
gcc version 4.2.1
Apple clang version 3.3
gcc version 4.7.3
Ubuntu clang version 3.2-1~exp9ubuntu1
|clang++||clang++ -std=c++11||g++||g++ -std=c++11||clang++||clang++ -std=c++11|
|omit space in nested template ">>"||X||X||X|
|for range (foreach)||X||X||X||X||X|
|move semantics (std::move)||X||X|
|raw string literals||X||X||X|
|encoded string literals||X||X||X|
|new function declaration style||X||X||X|
|can autodetect need for std::tr1||X||X||X||X||X|
|Nest Learning Thermostat: Installation, battery issues, and the importance of the "C" wire||August 30, 2012|
My furnace's control board. The "C" terminal has no connection to the thermostat in this picture. (The white wire on the C terminal goes to the A/C.) I connected the unused blue wire (bottom center) to the C terminal.
I recently bought and installed a Nest Learning Thermostat to replace my old non-networked thermostat. I show the installation, demonstrate control from mobile devices, and provide a general review in the above video.
It's been about a month since I installed the device, and I found one important issue yesterday. My Nest dropped off the network for 7 hours; upon investigation, I discovered that the battery was low and the device had turned off its Wi-Fi radio to save power. Many other people have reported problems with the battery, which is scary because your thermostat is one device that you absolutely want to work 24/7 -- you don't want your pipes freezing when you leave town and the Nest decides to run out of juice!
It turns out that my thermostat wiring, like that in many homes, does not provide a "C" wire (common 24VAC) for completing a circuit that provides constant power to the unit. This sort of wiring worked great for old-fashioned mercury thermostats -- it provides a red 24VAC power wire, and "call" wires for turning on the fan, heat, and air conditioning. When the thermostat needs to turn on one of those appliances, it simply closes the circuit between the red wire and the relevant call wire. Smart thermostats rely on batteries to power their smartness when no circuit is closed. When an appliance is running (i.e. one of those three circuits is closed), the thermostat can perform "power stealing" to sap power from the closed circuit, running itself and recharging its battery. For simple programmable thermostats, power stealing is probably sufficient. However, for a power-hungry device like the Nest that needs to operate a Wi-Fi radio, this mode of operation can be problematic for several reasons:
- If you live in a nice place like Colorado where you can open the windows and go days without using the heater or air conditioner, the control circuits are never closed and the Nest's battery doesn't have an opportunity to recharge.
- Power stealing is an imperfect backwards compatibility hack, and can't necessarily provide enough current to recharge the battery even when the appliances are operating. This is because the current may be limited by resistance in your furnace's control board.
- When the HVAC appliances are not running and the battery needs to be charged, the Nest performs an even worse hack than power stealing: it pulses the heater call circuit on and off very quickly to steal some power, and hopes that the pulses are short enough to keep the furnace from activating. I haven't noticed any problem with this, but at least one person has found that this wreaks havoc on their heater.
- The Nest uses a "Power Saving Mode" of Wi-Fi to reduce the power consumption of the radio and prolong the battery life. (And hopefully require less overall power than it can steal from the call circuits.) Nest indicates that some non-conformant wireless access points may not fully support this mode, thus causing the Nest to consume more power. (Perhaps more quickly than it can be replenished.)
I was lucky that my thermostat wiring contained an extra, unused (blue) wire, and my furnace's control board provided a 24VAC common terminal for a "C" wire. After hooking up the blue wire at the furnace and the Nest's base, I now seem to have successfully provided a 24VAC "C" wire to the Nest, and hopefully my battery issues are behind me.
I do think that Nest is perhaps overly optimistic about their power stealing and circuit pulsing being able to provide adequate power to the device. There's certainly no warning about this potential issue when you provide your wiring information to their online compatibility tool.
- Undocumented Nest incompatibility with single-stage wiring - Marco Arment of Instapaper fame reports on his experience with this issue.
- Options for adding "C" wire to thermostat - Stack Exchange
- Thermostat signals and wiring - a great resource for understanding thermostat wiring.
- When Nest needs a common "C" wire - The Nest support page for this topic (which somewhat downplays the severity of the issue).
|A Technical Look at Google Fiber||August 12, 2012|
While visiting Kansas City recently, I decided to investigate Google Fiber, the ambitious new residential gigabit Internet service that Google is building in Kansas City, Kansas, and central Kansas City, Missouri. While they haven't connected residential customers to the network yet, they have provisioned service at several local businesses. They also opened a showroom called "Fiber Space" to demonstrate the service to potential customers.
My first stop was the Mud Pie Vegan Bakery and Coffeehouse, a neat coffee house in a historic area of Midtown Kansas City. Mud Pie has the Google Fiber hookup, which customers can use via Wi-Fi or the ethernet-attached Chromebooks which Google has provided. I tried to convince the barista to let me borrow the ethernet connection from a Chromebook so I could test the fast path, but he declined, since Google doesn't want people interfering with its hardware that way. However, I found I was able to accomplish most of my investigation goals using a combination of my laptop on Wi-Fi and the wired Chromebooks. I ended up hanging out at Mud Pie for several hours, running tests and chatting with the barista and customers.
Four blocks south of Mud Pie, Google has set up a showroom for Google Fiber called "Fiber Space." It's a very consumer-oriented experience aimed at selling the service to locals. Many Google Fiber employees are on hand to show people what hardware they'll need, and demonstrate the Internet and TV services in virtual living rooms. The "car roller coaster" set from the Google Fiber promotional video and free snacks were also on hand. In addition to the wired Chromebooks on display, people can bring their laptops to try out Google Fiber via the Wi-Fi. However, an employee told me that they didn't allow hooking up to the wire, citing a concern about piracy or illegal activities or some such. (Which sounds like a pretty weak excuse to me.)
Naturally, the first thing people want to know about Google Fiber is how fast is it, really? Unfortunately, it's difficult to reliably measure the practical speed of the service due to the many other bottlenecks that exist once you remove the bottleneck of the last mile. Also, since others have performed plenty of speed tests, I decided to focus more on other characteristics of the network. However, I did run a few throughput tests for good measure.
Here is the result from speedtest.net, running on the wired Chromebook:
I tried running the test against servers in other locales, but the default Palo Alto server delivered the best result. I don't think these tests are great measures of throughput for such high speeds, since not only might the test servers be bottlenecked, but they may not run the tests long enough for the TCP window size to ramp up to the connection's true capacity.
A slightly better test was to download very large files full of random data from various cloud servers:
|data center||file size||time||rate|
|Forethought.net (Denver)||100MB||8 seconds||104.858 Mbps|
|Forethought.net (Denver)||512MB||42 seconds||102.261 Mbps|
|Forethought.net (Denver)||512MB||41 seconds||104.755 Mbps|
|Linode (Dallas)||256MB||72 seconds||29.826 Mbps|
|Linode (Dallas)||256MB||79 seconds||27.183 Mbps|
I don't know why the Linode download was so slow, although the outbound route to that server went out to California, and even across Comcast's network (!) before heading to Dallas. The download from a server at Forethought hit a much higher bottleneck somewhere, but it's difficult to say where.
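For reference, the rates in the table are just the file size converted to bits, divided by the wall-clock download time. A quick sketch of the arithmetic (assuming 1MB = 1,048,576 bytes and decimal megabits; the helper name is my own):

```shell
#!/bin/sh
# Convert a timed download into a rate in Mbps:
#   rate = size_in_bytes * 8 bits / seconds / 1,000,000
mbps() {
    mb=$1       # file size in MB (1 MB = 1048576 bytes)
    seconds=$2  # wall-clock download time
    awk -v mb="$mb" -v s="$seconds" \
        'BEGIN { printf "%.3f\n", mb * 1048576 * 8 / s / 1000000 }'
}

mbps 100 8    # first Forethought.net row: 104.858
mbps 256 72   # first Linode row: 29.826
```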
I performed pings and traceroutes to a number of hosts, to get an idea of Google Fiber's positioning on the network and the available peering points for outbound packets. These tests were conducted from the Wi-Fi network at Mud Pie, so a few milliseconds can be attributed to local Wi-Fi latency (see the first row of the table).
|host||location||min (ms)||avg (ms)||max (ms)||mdev (ms)||notes|
|networkbox||(local)||1.766||2.736||5.007||0.909||The local gateway, for reference|
|www.apple.com||Dallas, TX (see notes)||33.024||35.813||39.896||2.499||Akamai CDN node in Dallas, TX|
|www.kcnap.net||Kansas City, MO||75.605||76.977||78.628||0.902|
|www.frgp.net||Denver, CO||19.390||20.931||23.498||1.576||Major peering point in Denver|
|cafbit.com||Denver, CO||48.177||52.195||58.210||3.121||You are here|
|www.facebook.com||Palo Alto, CA||70.686||75.499||80.526||3.325|
|220.127.116.11||Mountain View, CA||39.154||42.498||45.488||2.106||Last Googlebot host to visit cafbit.com|
|b.root-servers.net||Marina Del Rey, CA||50.488||53.980||55.954||1.805|
|d.root-servers.net||College Park, MD||118.310||122.118||134.235||4.868|
|l.root-servers.net||San Jose, CA||47.904||48.733||49.733||0.561||(anycast)|
The full ping/traceroute output is available.
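The four numeric columns above are taken straight from ping's closing summary line; extracting them is a one-liner. A sketch, shown against the www.frgp.net figures (the `parse_rtt` helper name is my own):

```shell
#!/bin/sh
# ping ends its output with a summary line like this one; split out
# the four numeric fields on '=', '/', and spaces:
summary="rtt min/avg/max/mdev = 19.390/20.931/23.498/1.576 ms"
parse_rtt() {
    echo "$1" | awk -F'[=/ ]+' \
        '{ printf "min=%s avg=%s max=%s mdev=%s\n", $6, $7, $8, $9 }'
}
parse_rtt "$summary"   # min=19.390 avg=20.931 max=23.498 mdev=1.576
```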
As far as I can tell, outbound packets exit Google Fiber's network via links to either San Jose, CA, or Dallas, TX. In San Jose, Google Fiber seems to be peering with Comcast and XO Communications. (Presumably at Equinix's 11 Great Oaks facility.) In Dallas, Google Fiber seems to peer with Level 3 and Google's main network (which is a separate autonomous system from Google Fiber). As you might expect, access to Google services (such as Google Drive and YouTube) is quite snappy from the Google Fiber network.
While on Mud Pie's network, my laptop was assigned an IPv6 address in the fc00::/7 block which is designated for unique local addresses. However, I'm not sure what the point of this is. I definitely could not reach the IPv6 internet via ping6.
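Unique local addresses are easy to spot by eye: fc00::/7 covers exactly the addresses whose first two hex digits are fc or fd. A quick sketch (the addresses and the `classify` helper are made-up examples, not what I was actually assigned):

```shell
#!/bin/sh
# Classify an IPv6 address by its prefix. fc00::/7 means the first
# 7 bits are 1111110x, i.e. the address starts with "fc" or "fd".
classify() {
    case "$1" in
        fc*|fd*)             echo "unique local (fc00::/7, not globally routable)" ;;
        fe8*|fe9*|fea*|feb*) echo "link-local (fe80::/10)" ;;
        *)                   echo "possibly global" ;;
    esac
}
classify "fd12:3456::1"   # unique local (fc00::/7, not globally routable)
classify "2001:db8::1"    # possibly global
```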
Google Fiber is fast. If it were available in my neighborhood, I'd sign up.
UPDATE 2012-12-28: I've made another visit to Kansas City... see my post about plugging into the ethernet at the Hacker House.
|Announcing Valence64: A new platform for a new era||April 1, 2012|
Last year, I wrote an Android app called Valence that allows the user to remote-control the mouse and keyboard of another machine. Always looking for new challenges, I recently decided it was time for Valence to broaden its horizons beyond Android and support additional platforms to reach a wider audience.
In the following video, I demonstrate this exciting new release:
Yes, it's Valence for the Commodore 64. Now you can control your home theater PC easily and reliably from any C64 you happen to have handy. The source code for Valence64 is available on my GitHub under an Apache 2.0 license — bug fixes and feature patches are gladly accepted.
Valence64 requires the following hardware for proper operation:
- Commodore 64 or 128*
- 1541 or 1571 disk drive
- 64K RAM
- A supported ethernet cartridge: RR-Net, The Final Ethernet, 64NIC+ (and probably any other adapter with a cs8900a or lan91c96 chipset)
- One blank disk required
- Joystick optional
* - in C64 mode.
|Lua and Squirrel overhead||February 4, 2012|
I've been researching the idea of using embedded languages in mobile applications as a way of reusing business logic across platforms. I haven't found a lot of information about how much an embedded language will bloat an app's size, so I decided to see for myself. So far, I've written simple "Hello, world" apps for both Lua and Squirrel. Lua is a simple language that has been heavily used in video games for years. Squirrel is a newer language that was inspired by Lua, but uses a more C-like syntax.
These tests are not very scientific; they demonstrate only the bare minimum: including the language support as a native shared library, plus some JNI code that runs a script to generate a "Hello, world" message which is returned to the activity.
|Lua and Squirrel app delivery overhead (.apk size differences)|
|language||start size||final size||overhead|
|Lua||12817 (13K)||60089 (59K)||47272 (46K)|
|Squirrel||13530 (13K)||118520 (116K)||104990 (103K)|
|Squirrel (sans compiler)||13530 (13K)||99598 (97K)||86068 (84K)|
I haven't played around with writing code in these languages, though, so I'm curious to hear about people's real-world experiences.
|Using a Mac keyboard in Ubuntu 11.10 with Mac-like shortcuts||October 16, 2011|
I'm trying out Ubuntu 11.10 (Oneiric Ocelot) on a PC with a Mac keyboard attached. I made a few hacks to make the keyboard work smoothly and in a (very roughly) Mac-like fashion. I figured I'd make a few notes here for my own future reference. (Note: I'm using a U.S. keyboard. If you are using a different kind of keyboard, your mileage may vary.)
- Make the function keys (F1..F12) work as function keys without needing to hold down the Fn key.
- Use Mac-like keyboard shortcuts for window navigation (Cmd-Tab, Cmd-`) and the terminal (Cmd-C for copy, Cmd-V for paste).
- Avoid stepping on Unity's use of the Super key (i.e. the command key on Macs and the Windows key on PC keyboards).
- Use the legacy Caps Lock key for something useful.
- Change a driver parameter to enable use of the function keys without holding down the Fn key.
- By default, the keyboard's left and right command keys are mapped to Super_L and Super_R. Map these instead to the seldom-used Hyper_L and Hyper_R keysyms. (If you try to use the Super keys for shortcuts, the Unity dock will appear every time you hold down the command key. It's really annoying.)
- Map the Caps Lock key to Super_L so it can be used for certain Unity shortcuts.
Making function keys work
Create a file in /etc/modprobe.d which sets the fnmode parameter of the hid_apple driver to 2 = fkeysfirst:
echo 'options hid_apple fnmode=2' | sudo tee /etc/modprobe.d/apple_kbd.conf
Reboot, and the function keys will work without needing to hold down the Fn key. (You can access the volume controls and such by holding down the Fn key.) Thanks to Alan Doyle for reporting on this tweak.
Remapping the keys
I used the xkbcomp utility to remap the keys. I extracted the current keyboard mappings into a default.xkb file, made a copy of the mapping file as mackeyboard.xkb, made the changes to this file, then loaded the new mapping into the running X server:
xkbcomp :0 default.xkb
cp default.xkb mackeyboard.xkb
xkbcomp mackeyboard.xkb :0
I'm attaching my mackeyboard.xkb file and the diff for reference. (Use these at your own peril.) I made the following changes:
- Changed the LWIN and RWIN keycode identifiers to LCMD and RCMD, for clarity.
- Commented out the LMTA and RMTA keycode aliases, to avoid confusion.
- Changed the CAPS keysym mapping from Caps_Lock to Super_L.
- Changed the LWIN and RWIN (now LCMD and RCMD) keysym mappings from Super_L and Super_R to Hyper_L and Hyper_R.
- Changed the modifier mapping so that only the CAPS keycode is used for Mod4. Since Mod3 wasn't previously in use, I mapped Hyper_L and Hyper_R to this modifier.
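In xkb syntax, the keysym and modifier changes above boil down to something like the following. (A hand-edited sketch of the relevant lines, not the full diff; see my attached mackeyboard.xkb for the real thing.)

```
// xkb_symbols: Caps Lock becomes Super; the command keys become Hyper
key <CAPS> { [ Super_L ] };
key <LCMD> { [ Hyper_L ] };    // keycode alias renamed from <LWIN>
key <RCMD> { [ Hyper_R ] };    // keycode alias renamed from <RWIN>

// only Caps Lock drives Mod4; the Hyper keys get the unused Mod3
modifier_map Mod4 { <CAPS> };
modifier_map Mod3 { Hyper_L, Hyper_R };
```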
Configuring new shortcuts
In System Settings -> Keyboard -> Shortcuts, configure these shortcuts:
|Navigation||Switch windows of an application||Cmd+`|
|Windows||Toggle fullscreen mode||Cmd+Return|
In Terminal's Edit -> Keyboard Shortcuts, configure these shortcuts:
I think the biggest benefit of the new Terminal shortcuts is the use of sensible copy and paste shortcuts that don't interfere with using Ctrl-C and Ctrl-V in the shell.
The following improvements are left as an exercise for the reader:
- Have xkbcomp load the new mapping every time you log in, so you don't have to run it manually.
- Make other applications (such as Google Chrome) recognize Mac shortcuts such as Cmd-C and Cmd-V.
- Figure out a generic way for specifying key translations for specific apps that happen to be in the foreground, similar to the functionality that AutoHotkey provides for Windows. (compiz plugin? resurrect the deprecated XEvIE X11 extension?)
Update, November 7, 2011: AutoKey
In the comments, Nivth Ket brought to my attention the AutoKey tool for mapping arbitrary keys to other keys, phrases, or even Python scripts. This tool seems to use the XRecord extension to X11 to listen to incoming keys. I gave AutoKey 0.80.3 a test drive, and found a few limitations that clashed with my needs. However, with a few hacks, I think I've overcome these limitations and found a solution that seems to work for me so far. The limitations and workarounds are as follows:
- The AutoKey GUI does not allow assigning the same hotkey to multiple actions. This prevents me from assigning a key combination to do one thing in a particular application (e.g. when the window title matches "Google Chrome") and something else in every other application. The workaround is to edit the configuration files in ~/.config/autokey/data directly.
- AutoKey has no notion of ordering for its entries; they are processed in a seemingly random order. Therefore, if my entry for "Cmd-V with no window filter" happens to come before my entry for "Cmd-V only for Terminal windows", the former eclipses the latter, and the Terminal-only rule never fires. My workaround was to hack AutoKey to always process entries with filters first, then entries with no filters. Here is the patch.
- AutoKey does not support the little-known "Hyper" modifier key, which I use in my layout for the "command" keys. My workaround was to hack AutoKey to support the Hyper modifier. Here is the patch.