Monday, March 04, 2013

Updating a System Shock 2 Wallpaper for HD Resolutions (in Javascript!)

Back when I was in college, I was a big System Shock 2 fan. My favorite co-op experience of all time was when my dorm roommate and I played SS2 together. I had all kinds of ideas for case mods (even though I had neither the money nor the tools to make it happen). Ultimately, my only creative contribution to the world of Shock was to combine two wallpapers that were floating around the net into one of my very own:

Yeah, I was pretty pleased with myself back in the day. In any case, I wanted to commemorate the recent GoG re-release of SS2 by inviting Shodan to adorn my desktop yet again. Unfortunately, screen resolutions have increased quite a bit in the intervening years, and a pixelated Shodan simply won't do. Fortunately, in the GoG re-release, they included a ludicrously high resolution 5100x3338 pixel render. All I need to do is to scale that down, generate the ASCII half, blend them, and Bob's your uncle.

I'm sure that there are a lot of image-to-ASCII generators out there, but I can't pass up a chance to learn something, so I decided to write my own. That's not even the interesting part of the story. Because I'm a masochist, I decided to do it with HTML and Javascript. I figured that, between the drag-and-drop API, canvas, and a high-performance JS engine like V8, I could probably get away with it.

Many hours later, I have something that basically works. I'll probably clean it up and get it posted to GitHub. It wasn't too hard to allow dropping an image file onto the page. I end up doing a lot of work against a scratch canvas before finally dumping the output into an image element using the HTMLCanvasElement toDataURL method. This is great; I can then drag the image off the page and onto my desktop (something that the canvas element doesn't automatically do). Even though the data URL is ridiculously long, it correctly displays on the screen. However, when I was working with the original 17 megapixel image, I found that dragging the output image out of my browser would immediately crash the Chrome tab. Fortunately, Chrome has no problems with the image at my target resolution (1920x1080).
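In case you're curious how the pieces fit together, here is a stripped-down sketch of the approach (this is not the actual code from the repo - the character palette, cell size, and the result img element are placeholders of my own):

var CHARS = "@%#*+=-:. ";   // glyphs from densest to sparsest (my palette, not necessarily the real one)
var CELL = 8;               // one character per 8x8 block of source pixels

document.body.addEventListener("dragover", function (e) { e.preventDefault(); });
document.body.addEventListener("drop", function (e) {
    e.preventDefault();
    var img = new Image();
    img.onload = function () {
        // scratch canvas: just a place to read pixels from
        var src = document.createElement("canvas");
        src.width = img.width; src.height = img.height;
        var sctx = src.getContext("2d");
        sctx.drawImage(img, 0, 0);

        // output canvas: where the "digital" half gets drawn
        var out = document.createElement("canvas");
        out.width = img.width; out.height = img.height;
        var octx = out.getContext("2d");
        octx.fillStyle = "black";
        octx.fillRect(0, 0, out.width, out.height);
        octx.fillStyle = "green";
        octx.font = CELL + "px monospace";

        for (var y = 0; y < src.height; y += CELL) {
            for (var x = 0; x < src.width; x += CELL) {
                // average brightness of this cell
                var d = sctx.getImageData(x, y, CELL, CELL).data;
                var sum = 0;
                for (var i = 0; i < d.length; i += 4) {
                    sum += (d[i] + d[i + 1] + d[i + 2]) / 3;
                }
                var avg = sum / (d.length / 4);
                // brighter cell -> denser glyph, since we draw light text on a dark background
                var ch = CHARS[Math.round((1 - avg / 255) * (CHARS.length - 1))];
                octx.fillText(ch, x, y + CELL);
            }
        }

        // dump the result into an <img> so it can be dragged off the page
        document.getElementById("result").src = out.toDataURL("image/png");
    };
    img.src = URL.createObjectURL(e.dataTransfer.files[0]);
});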

Because this extremely long data URL feels pretty sketchy, I looked to see if there was a way around it. I would love to output the results to a canvas element instead of an image. All I need is to use the DnD API to make the canvas a valid drag source. Of course, in order to do that, I need to be able to generate the PNG bytestream, as well as synthesize a File object in the browser. While it's definitely possible to build a pure-JS PNG encoder, I don't see any way to synthesize a File object. Although the DnD spec specifically asks for a File, maybe it would be happy with an arbitrary Blob instead; I don't know, and I haven't yet tried. If the spec doesn't support this use case, it's a shame; I can think of a number of cases where it would be neat to generate a file from client-side JS.

I like to think that my skills as an artist have improved in the intervening years as well. A little stylistic shading, and here is the result.

I used a different technique to generate the digital side (the original used 0s and 1s and modulated the intensity on a pixel-by-pixel basis; I achieve my shading by choosing from a larger palette of characters). Still, I feel like the end result has the same tone as the original. And just like last time, I'm pretty pleased with myself. Let me know what you think!

Edit: The code is available on GitHub. You can try it out on my site.

Saturday, January 14, 2012

Using Apache on Mac OS X to serve files outside ~/Sites

I'm working on a web project that basically contains just static HTML and Javascript (well, OK, there's also one small PHP script, but it might be going away in the near future). I tend to keep all my source code in ~/src, but to host it, I also need it to appear in ~/Sites. After some small trial-and-error, I ended up putting everything (git repo and all) into ~/Sites, and then symlinking to it from ~/src. It wasn't pretty, but it worked.

So I just did some reorganization that pretty much invalidated that old structure. In particular, I have moved everything that needs to be deployed into ~/src/project/web. However, I want it to be accessible via http://localhost/me/project. I tried physically moving the project back into ~/src, and then making a symlink to the subdirectory, but that didn't work. Apache would still produce 403s for all the relevant files. So I had to roll up my sleeves and dive into Apache configuration.

Before I go further, I'm compelled to pull out the old soapbox. I have painfully little experience with Apache - I have never had to configure or support it in a production environment, and that makes me happy. From this position of ignorance, I have decided that Apache is a dinosaur that should have died a long time ago. For example, instead of configuring the server from the request's point of view (as has been popularized by Rails' routing logic), it is configured from the filesystem's point of view. The default Mac OS configuration has, buried somewhere in the middle of the file, a directive that disallows the serving of all files under /. Because, I guess, they would be served by default if that directive wasn't present? But still, nobody seems to want to spend the time to produce a replacement web server, and so we struggle on. </rant>

The default Apache install on Mac OS X 10.7 uses a split Apache configuration. The bulk of the configuration is in /etc/apache2/httpd.conf. However, each user also gets their own /etc/apache2/users/me.conf, all of which are imported into the main configuration. And while the main configuration file specifies the FollowSymLinks option, I discovered that the same was not true of my personal config file. All I had to do was add the FollowSymLinks option to that file, restart Apache, and everything started working.
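For reference, the per-user file ends up looking roughly like this once the option is added (the exact stock contents may differ slightly, and "me" is a placeholder for your username):

<Directory "/Users/me/Sites/">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>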

So if you have only basic web serving needs, the default config should suffice. If, however, you want/need to spread the files around your disk, you need to mess with the Apache configuration.

Monday, September 05, 2011

Mysterious, Blank User in 10.7 Sharing Dialog

I wanted to copy some files from my PC to my Mac. When I went to turn on SMB sharing, I came across this:

I was wondering about the identity of this phantom user. It turns out that it is the macports user. He doesn't show up on the login screen. He never showed up under 10.6. Apparently, Apple changed something about the way users are reported to applications.

If it bothers you, you can fix it with dscl.

sudo dscl . -create /Users/macports RealName macports

The first parameter is the machine you want to administer; . is apparently a shortcut for localhost. Then we give the command - we want to create a new key. Then we specify where this key should be created - in this case, the macports user's Directory Services path. Next is the name of the key - RealName is what appears to be used by the sharing dialog. (RealName is also assigned to users you create through System Preferences.) Finally, we provide a value for this user's name. Now, we have this:
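If you want to verify the change (or just see what a record looks like), dscl can also read keys back out:

dscl . -read /Users/macports RealName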

Tuesday, September 14, 2010

DES Encryption As Used in VNC Authentication

A few notes about how DES is used in the VNC Authentication security type (type 2)

  1. DES is used in ECB mode.
  2. The DES key is based upon an ASCII password. The key must be 8 bytes long. The password is either truncated to 8 bytes, or else zeros are added to the end to bring it up to 8 bytes. As an additional twist, the bit order of each byte is reversed (the low-order bit becomes the high-order bit, and so on). So, if the ASCII password was "pword" [0x 70 77 6F 72 64], the resulting key would be [0x 0E EE F6 4E 26 00 00 00].
  3. The VNC Authentication scheme sends a 16-byte challenge. This challenge should be encrypted with the key that was just described, but DES in ECB mode can only encrypt an 8-byte block. So, the challenge is split into two halves, each half is encrypted separately, and the two results are jammed back together.
Here is some pseudocode (in Erlang) that should explain better than words can.
password_to_key(Password) ->
    Flipped = lists:map(fun flip_byte/1, Password),
    Truncated = truncate(Flipped, 8),
    pad(Truncated, 8, $\0).

encrypt_challenge(Password, Challenge) ->
    Key = password_to_key(Password),
    <<High:8/binary, Low:8/binary>> = Challenge,
    EncHigh = crypto:des_ecb_encrypt(Key, High),
    EncLow = crypto:des_ecb_encrypt(Key, Low),
    <<EncHigh/binary, EncLow/binary>>.
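The flip_byte/1, truncate/2, and pad/3 helpers aren't shown above; here is one way they might look (a sketch - flip_byte reverses the bit order of a byte, which is what turns 0x70 into 0x0E):

flip_byte(Byte) ->
    %% reverse the bit order: bit 0 becomes bit 7, bit 1 becomes bit 6, and so on
    lists:foldl(fun(I, Acc) ->
        Acc bor (((Byte bsr I) band 1) bsl (7 - I))
    end, 0, lists:seq(0, 7)).

truncate(List, Length) ->
    lists:sublist(List, Length).

pad(List, Length, _Char) when length(List) >= Length ->
    List;
pad(List, Length, Char) ->
    List ++ lists:duplicate(Length - length(List), Char).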

Wednesday, July 28, 2010

Unpacking a Safari Extension

So, now that Safari extensions are official (and not just a developer curiosity), I decided to see what people had managed to make over at the extension gallery. It looks like there are some cool ideas out there. I was somewhat interested in the Exposer extension, which sounded a bit like Exposé for Safari. It seems like it kinda works, except that it doesn't always bring up the list of windows, and it's also really slow (it looks like visibleContentsAsDataURL is the culprit, natch, plus I have dozens of tabs open at a time).

Anyway, while I was checking it out, I realized that I had no idea what some of these Safari extensions were doing in the background. Stop and think for a moment; do you really want to run code that some Jimmy wrote in his basement to be able to watch everything that you do in your browser? Maybe I'm just paranoid, but I'd like to know what is really going on.

So, naturally, I tried unpacking an extension. It wasn't particularly hard, but you have to realize that a .safariextz file isn't a ZIP archive. It's a XAR. I know; I opened it up in my hex editor.

Here's how you can unpack one for yourself:

xar -xvf extension.safariextz -C ~/Desktop

Don't worry; there's a directory just inside the safariextz archive. Now to see if there's anything malicious in these extensions. (Exposer looks clean so far.)
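Incidentally, if you just want to peek at the contents before extracting anything, xar will also list an archive, much like tar:

xar -tf extension.safariextz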

Thursday, April 29, 2010

Even More Space Marines

I've spent some more time on my Space Marine squad (it's going on something like 6 months at this point - I just don't get a ton of time to paint). Anyway, the tactical squad is basically done. Notice the highlights.

In addition, I started to work on some figures from Assault on Black Reach, including some terminators. In particular, I spent quite a while trying to get the white helmets to look good. I'm also pretty pleased with the eyes. No Golden Demons here, but not bad for tabletop.

Finally, I had a spare marine sitting around, so I decided to make him up as a Blood Angel. I have two copies of Space Hulk that I want to paint, but I wasn't going to do so until I was ready. I think I'm almost ready. Also, another shot of the terminators' eyes.

Thursday, April 15, 2010

Things I Learned While Debugging an SSL Issue

  • SSL is sometimes actually TLS. SSL is apparently on the way out, though TLS is only supported in a subset of common browsers. Fortunately, both use the same kind of certificate, so it's mostly transparent.
  • Java 1.6u17 removed SSL client support for MD2-signed root certificates. Except it sometimes didn't. Some u17 installs worked for me, some failed. 1.6u19 failed every time. If you have a Java client connecting to a SSL server, make sure that the server certificate was generated against a SHA1-signed root certificate.
  • Wireshark will analyze both SSL and TLS. If there's any confusion about what is coming from the server, Wireshark can help you figure it out.
  • The server sends the whole certificate chain to the client. I had thought that this was the case, but I had a hard time finding the documentation that spells it out. In the end, I used Wireshark to find out (the openssl commands after this list will show you the same thing).
  • Web browsers sometimes lie. When I would ask the web browser for the certificate chain, it would tell me something different from what the server actually sent. The root certificate from the server was signed with SHA1, but the browser would tell me that it was signed with MD2. This occurred in Internet Explorer, Firefox, and Safari. This was also a red herring that caused me to waste a lot of time.
  • Make sure you are looking at the right server. I had made an assumption about how the Java client software talked to the server, and that assumption was incorrect. In the end, the problematic certificate was on a different server altogether. Go figure.
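If you don't want to break out a packet capture, openssl can show you much the same information (standard OpenSSL commands; substitute your own host, and save each certificate block from the dump into its own file before inspecting it):

# dump the chain exactly as the server presents it
openssl s_client -connect example.com:443 -showcerts < /dev/null

# then inspect any one of the saved certificates
openssl x509 -in cert.pem -noout -text | grep "Signature Algorithm"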

Friday, January 29, 2010

Deriving the Y Combinator in Erlang - Part 2: Abstraction

This is the second post in a series on the Y combinator. Part 1


In the last post on the Y combinator, we established that some functional languages (such as Erlang) make it hard to have recursive, anonymous functions. Namely, there is no name that is bound to the "current" anonymous function. Furthermore, we established that we could work around this problem by passing the anonymous function to itself. When we concluded, we had derived this much:

Helper = fun(Helper2, N) ->
    case N of
        1 -> 1;
        _ -> N * Helper2(Helper2, N - 1)
    end
end,

Fact = fun(X) -> Helper(Helper, X) end

This post will focus on extracting the Y combinator from the above code. We will follow a sequence of transformations, each with a specific intent. In the end, we will hopefully have some beautiful code.

Reducing the Number of Arguments

This factorial algorithm started with a function that took a single parameter. This function called itself with successively smaller values, until it reached a base case. However, we muddied the waters by passing the function around as well. We would like to return to having a one-parameter function. We can accomplish this by creating another level of closure:

Helper = fun(Helper2) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * (Helper2(Helper2))(N - 1)
        end
    end
end,

Fact = fun(X) -> (Helper(Helper))(X) end
Now it is clear that our recursive function is really only calling itself with one parameter.

Simplifying the Recursive Function Call

The place where we make the recursive call is rather ugly. We have to build the function upon which we will recurse before we can actually call it. It would be better if the recursive function was explicitly named. We can do that, too:

Helper = fun(Helper2) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> 
                PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
                N * PAHelper2(N - 1)
        end
    end
end,

Fact = fun(X) -> (Helper(Helper))(X) end

We call the new identifier PAHelper2 to suggest that it's a partially-applied version of Helper2.

We can simplify the body further by moving PAHelper2 to a higher scope.

Helper = fun(Helper2) ->
    PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * PAHelper2(N - 1)
        end
    end
end,

Fact = fun(X) -> (Helper(Helper))(X) end

As an aside, you might find this step overly complicated. You may ask, why did we not simply define PAHelper2 = Helper2(Helper2)? Why the extra indirection? We don't actually want to evaluate Helper2 just yet. In fact, if we were to do so, we'd end up in an infinite loop. If the first step of Helper is to immediately call Helper2 (an alias for Helper), we'll be recursing on ourself with no way to ever terminate.

Extracting the Anonymous Function

Right now, the body of our algorithm is embedded deeply within some necessary plumbing. We would like to extract our algorithm from the center of this. This is quite easy:

Helper = fun(Helper2) ->
    PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
    FactRec = fun(Self) ->
        fun(N) ->
            case N of
                1 -> 1;
                _ -> N * Self(N - 1)
            end
        end
    end,
    FactRec(PAHelper2)
end,

Fact = fun(X) -> (Helper(Helper))(X) end

Now we can pull it outside the body of Helper.

FactRec = fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end,

Helper = fun(Helper2) ->
    PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
    FactRec(PAHelper2)
end,

Fact = fun(X) -> (Helper(Helper))(X) end

Simplifying Fact

We're getting close, but the definition of Fact still leaves something to be desired. However, in order to make it simpler, we have to first make it messier. Start by moving Helper inside the definition for Fact:

FactRec = fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end,

Fact = fun(X) ->
    Helper = fun(Helper2) ->
        PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
        FactRec(PAHelper2)
    end,
    
    (Helper(Helper))(X)
end

Our goal is to build a general-purpose function Y that takes a function F and produces a self-recursive version of that function. Right now, the innermost part of Helper makes an explicit reference to FactRec. We want to eliminate that explicit reference:

FactRec = fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end,

Fact = fun(X) ->
    Y = fun(Proc) ->
        Helper = fun(Helper2) ->
            PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
            Proc(PAHelper2)
        end,
        Helper(Helper)
    end,
    (Y(FactRec))(X)
end

Now that we've done this, Y no longer has any free variables (it no longer refers to FactRec), so we can pull it out of Fact completely:

FactRec = fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end,

Y = fun(Proc) ->
    Helper = fun(Helper2) ->
        PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
        Proc(PAHelper2)
    end,
    Helper(Helper)
end,

Fact = fun(X) ->
    (Y(FactRec))(X)
end

Of course, if we want, we can simplify some of these definitions. Y can become a normal Erlang module function (rather than a function value). Fact itself can be eta-reduced - since it does nothing but forward its argument, we can drop the explicit parameter and bind Fact directly to the result of y. Also, FactRec doesn't need to be named anymore - it can become the anonymous function that we originally intended:

y(F) ->
    G = fun(G2) ->
        F(fun(Z) -> (G2(G2))(Z) end)
    end,
    G(G).


Fact = y(fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end)
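As a quick sanity check, the new Fact behaves just like the plain recursive version:

Fact(5)  %% returns 120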

Unfortunately, the y function only supports functions that take a single parameter. Some languages have a "splat" operator that can be used to represent "all the parameters;" unfortunately, Erlang does not. Instead, it can be useful to define a family of y functions that deal with functions taking more than one parameter:

y2(F) ->
    G = fun(G2) ->
        F(fun(Y, Z) -> (G2(G2))(Y, Z) end)
    end,
    G(G).
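For illustration, here is an anonymous two-argument function (integer exponentiation) made recursive with y2:

Pow = y2(fun(Self) ->
    fun(Base, Exp) ->
        case Exp of
            0 -> 1;
            _ -> Base * Self(Base, Exp - 1)
        end
    end
end),

Pow(2, 10)  %% returns 1024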

Conclusion

We have shown the difficulty in defining recursive, anonymous functions in Erlang. We showed a simple solution to this problem, and then generalized the plumbing to make it easier to use. While this is not necessary in all functional languages, I hope that this is useful to anybody working in a strict language, such as Erlang.

I am planning more posts on this topic. One post will explain the strange step I took in Simplifying the Recursive Function Call. Others will explain just what a fixed point is, what the fixed point combinators are, and how the Y combinator satisfies the definition of a fixed point combinator.

Monday, January 18, 2010

Deriving the Y Combinator in Erlang - Part 1: Intuition

This is the first of a series on the Y combinator. Part 2


When I first heard about fixed point combinators, I didn't know what to make of them, so I filed the topic away in my brain. However, when I was working on implementing continuations in Erlang, I ended up building a small structure that reminded me of the Y combinator. With a little massaging, I extracted the actual Y combinator, and proceeded with what I was working on.

The actual definition of the Y combinator is insanely dense:

Y = λf·(λx·f (x x)) (λx·f (x x))
This is precisely why I didn't understand it at first - that definition means nothing to me. We need a more intuitive way to think about it.

Suppose you decide to write the factorial function in Erlang. A simple (by which I mean unoptimized) implementation might look like this:

fact(N) -> 
    case N of
        1 -> 1;
        _ -> N * fact(N - 1)
    end.
There's nothing particularly complicated here - we're just calling fact recursively. But what happens if you try to make fact into a fun (an anonymous function in Erlang)? Watch:
Fact = fun(N) -> 
    case N of
        1 -> 1;
        _ -> N * ??? (N - 1) %%How do we call ourselves? Fact isn't yet bound!
    end
end.
In some languages, we could replace the ??? with Fact. Unfortunately, Erlang doesn't let you do this. If you tried, Erlang would say that Fact is unbound. This is true - until we've finished defining the fun, we can't assign it to Fact. Other languages provide you with a magic variable that represents the current function (Javascript has arguments.callee). Again, as far as I know, Erlang doesn't provide such a variable. Does that mean that we have no hope?

Let's look at this problem one step at a time. We need something to stand in for the ???. We need a name that represents the current, anonymous function. Where can we get that from? In functional Erlang, there are only three ways that names are bound - by closure, by parameter, or by local definition. We can't close over it, because the anonymous function isn't yet defined. We can't create a local definition, because the local scope is too narrow for that. That leaves only one possibility - we need to pass the anonymous function to itself.

Helper = fun(Helper2, N) ->
    case N of
        1 -> 1;
        _ -> N * Helper2(Helper2, N - 1)
    end
end.

Fact = fun(N) -> Helper(Helper, N) end.
OK, so we created a helper function - more on that in a minute. Helper (formerly Fact) now takes an extra parameter, which just creates another name for the current, anonymous function. Since we intend for that to be the same as Helper, we call it Helper2. We know that Helper is a fun/2. Since Helper2 is supposed to be another name for Helper, then Helper2 must be a fun/2 as well. This means that, when we call Helper2, we need to also tell it about itself - that is, we need to pass Helper2 along when we call Helper2 to recurse.

Now that leaves us to deal with the function Fact. Clearly, Fact needs to call Helper. We noted that Helper is a fun/2, so again, we need to call it with two parameters. The intent of the extra parameter to Helper was to be able to pass Helper to itself, so we do just that.

Believe it or not, we have just derived the concept behind the Y combinator. We have invented a scheme that allows an anonymous function to know itself, even in a language that doesn't directly support this. This is (I believe) the purpose behind the Y combinator. However, we're not yet done. There is still some cruft that we would like to eliminate. In particular, we hand-built the plumbing to route Helper2 around. We would like to use higher order functions to eliminate this. This is what the Y combinator does - it manages the plumbing of anonymous functions that refer to themselves.

In the next part, we will continue the derivation of the Y combinator in Erlang. Our goal is to eventually be able to write something like this:

Fact = y(fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end).
It's not perfect, but in a language that doesn't directly support anonymous function recursion, it's not too bad!

Monday, December 07, 2009

Even More Space Marine Painting

I've been slowly working on my Space Marines. It's taken a while, but they almost look like a unit. I think I've spent between 10 and 20 hours on them, but much of that was spent learning. Most of the major painting is done, and now it's time for touchups and details. For example, I spent some time on that rocket launcher to make it appear to be metal, painted red, and then worn. I think it's pretty convincing. I intend to do the same with that red bolter. The one that's not wearing armor needs a lot of work. Enjoy.

Tuesday, November 17, 2009

Standard algorithms and boost::ptr_vector

I did something bad the other day.

OK, I can't tell if it was bad. In another environment, it would have been bad, but since this was C++, perhaps it was OK. I was in the situation where I had a boost::ptr_vector, and I wanted to use a standard algorithm on it. Specifically, I wanted to use std::partition to separate the objects that were still "alive" from those that were "dead" (where alive and dead are domain concepts in our application). The complexity here is that ptr_vector is a crazy container.

Most containers deal with a specific type T. You add Ts to the container. Dereferencing an iterator gives you a T&. It's generally assumed that a container operates on a single type, and the standard algorithms make this assumption.

The ptr_vector, on the other hand, appears to be two containers at once. Semantically, it's analogous to a std::vector<managed_ptr_type<T> >. The intent is that, when you add a pointer to a ptr_vector, the container takes ownership of the lifetime of the object at the end of that pointer. So, it is a container of pointers. On the other hand, when iterating a ptr_vector, it appears to be a container of Ts.

In my case, I wanted to rearrange my ptr_vector. In particular, I wanted to partition the pointers into those whose object was still "alive", and those whose object was "dead". Since a ptr_vector is semantically a container of pointers, it made sense that I should apply std::partition to the ptr_vector. However, ptr_vector::iterator removes a level of indirection: instead of iterating T*, it iterates T&.

In fact, ptr_vector doesn't seem to provide any ways to rearrange the pointers once they are put into the container. Sure, you can mutate the object on the end of the pointer. You could operate at that level. But there doesn't appear to be a safe way to treat the ptr_vector as a container of pointers.

Fortunately, ptr_vector provides a back door. Its iterators support a base() method, which will return an iterator over T* (instead of an iterator over T&). This allows us to treat the ptr_vector as a container of pointers, and to use standard algorithms to manipulate those pointers. Now, this is not without peril. While it seems to be OK to rearrange the pointers, it wouldn't be safe to change the set of pointers. I wouldn't trust using something like std::remove_if, because it might leave garbage in the container after it is done. The container might contain duplicate pointers. Some pointers might get dropped completely. If the container then goes out of scope, it will try to delete these pointers multiple times, which would be a bad thing. It might also fail to delete some pointers, because they were overwritten (and not preserved elsewhere in the container).
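To make that concrete, here is a sketch of the technique (not my production code - Widget and is_alive() stand in for the real domain types, it leans on the base() method described above, and it's written with a C++11 lambda for brevity):

#include <algorithm>
#include <boost/ptr_container/ptr_vector.hpp>

struct Widget {
    bool alive;
    bool is_alive() const { return alive; }
};

void move_live_to_front(boost::ptr_vector<Widget>& widgets) {
    // base() exposes the underlying pointer sequence, so std::partition
    // rearranges the stored pointers themselves. The set of pointers is
    // unchanged - nothing is duplicated or dropped - which is what makes
    // this (apparently) safe.
    std::partition(widgets.begin().base(), widgets.end().base(),
                   [](void* p) { return static_cast<Widget*>(p)->is_alive(); });
}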

This whole thing felt like the best solution possible, while at the same time leaving a lot to be desired. I felt like I was violating the encapsulation of the ptr_vector. I suppose this is one of those cases for which they put in the base() methods on the iterators. Additionally, I don't see any clear way that they could do better. For example, I think an assumption of ptr_vector is that a given pointer only occurs inside it at most once. The standard algorithms don't necessarily respect this assumption; see my commentary on remove_if in the previous paragraph. The standard algorithms, in some cases, expect more freedom than ptr_vector can provide. This disconnect is unfortunate, but not without reason.

An important first step to helping with this problem would be to add methods to ptr_vector (and its siblings) that allow you to treat it as a container of pointers. You could add, remove, and re-arrange the container using these methods. In addition, they could maybe provide specializations of some of the standard algorithms for each container. This is difficult for third party developers to do, since the actual type of a ptr_vector::iterator is implementation defined. The boost guys can cleanly provide a specialization of std::partition for this kind of iterator, but I can't. Now, this isn't perfect. It would help with the standard algorithms, but not third-party algorithms. Still, it would be a great step in the right direction.

So, did I do something bad, or did I do something necessary?

Monday, November 02, 2009

Why Google Experience phones are pretty awesome

As Android has grown, devices fall into one of two major classifications. Some devices are so-called "Google Experience" devices (featuring the phrase "with Google" somewhere on the device). Other devices are, well, not Google Experience devices. What is the difference? I've had a hard time figuring it out.

I think that Google Experience phones are updated by Google itself, while the rest of the devices are supported by the phone's manufacturer. I have an original G1 (a Google Experience phone), and I've gotten prompt updates as each new Android OS version has been released. This is similar to the experience that iPhone users enjoy.

Some devices, such as the HTC Hero and the Motorola Cliq (and the HTC Magic in certain regions), are not Google Experience phones. These phones were released with heavily customized software (such as HTC's Sense UI or Motorola's Motoblur). These customizations, while attractive to some users, also make it much harder for the phone manufacturer to update to a new version of the base Android OS. Both the Hero and the Cliq shipped with Android 1.5, and I don't believe that there are announced plans to update either to 1.6 (or 2.0, for that matter).

At first, I thought that the notion of a Google Experience phone was silly. At the time, the Magic was launching on Rogers with Exchange support, and that somehow disqualified the phone from being a Google Experience device. I now understand that Google Experience really means "unforked code base". In order to add Exchange support, I suspect that HTC had to fork and modify the standard Mail app. While they were able to add a feature that people wanted, it really just makes these phones into some sort of mutant Android device. No thank you. Google should really make it clear to users that the Google Experience is a feature in and of itself.

Android, at this point, is a rapidly evolving platform. Google Experience phones seem to be the best way to keep up with this evolution. I was pleased when I heard a Verizon rep say that the Droid will be a Google Experience phone. Now they just need to release a T-Mobile US GSM version, and I'll be happy. Over time, Android evolution will slow down, and then it might make sense for a manufacturer to fork the Android code base. Maybe they would even be willing to contribute back to the core distribution. But, until then, I'm sticking with Google Experience devices.

Fixing hard disk clicking / aggressive head parking on Mac OS X

I recently bought a Western Digital Scorpio notebook hard drive to put into my 2007-vintage Macbook Pro. Everything seemed fine at first. However, as I used my laptop, I noticed that it would frequently make a quiet clicking noise. At first, I thought that I had gotten a bad disk. However, after doing a little research, it became clear that this is a common problem. This clicking is a "normal" operational noise - it is the sound of the heads parking.

People say that you should just get used to the noise. However, this blog post makes the argument that every one of these clicks is killing your hard disk. Some people claim that this is related to the sudden motion sensor that's built into most (if not all) Apple portables. However, this is a red herring. The disk still clicks even if it is sitting on a table. It is the hard disk's own built-in power management that is causing the head parking. The disk's SMART statistics record the number of head parking cycles. If you want to see this for yourself, you can use either this menu extra or this command line tool (MacPorts). You are looking for the Load Cycle Count value.

To explain the problem (as I understand it), modern hard disks have some responsibility to manage their power consumption. One manifestation of this is to spin down the platters and to park the read/write heads. The operating system can influence the time before the heads are parked by setting the "APM Level" of the drive to a value between 0x00 and 0xfe. Each drive manufacturer is free to interpret this value as they see fit. Mac OS X seems to set a default APM Level for all disks, and I think this value is 0x80. This is fine with Apple-shipped disks, but not necessarily for third party disks.

But wait! Perhaps you have bought the same kind of drive that Apple ships in their laptops. Are you safe? Not necessarily. Allegedly, Apple flashes their own firmware onto the hard disks that they install at the factory. That's right, you're not running stock disk firmware. My suspicion is that this firmware changes the drive's interpretation of the default APM level. Recently, there was a firmware update from Apple that fixed this problem on disks that were shipped by Apple. Unfortunately, you can't use this utility to flash the new firmware onto a non-Apple drive.

Right, so the two solutions that I see are either:

  1. Write our own firmware
  2. Set a different APM Level value
Obviously, option 2 looks much more attractive. Bryce McKinlay wrote a utility called hdapm for doing just that. He even includes a launchd configuration to run hdapm as the system starts. One thing not mentioned in the readme is that you need to get the permissions of the launchd config file correct. The file needs to be owned by root (preferably root:wheel), and must not be group- or world-writeable. I also changed the config file a little; here is my version:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
 <key>Label</key>
 <string>hdapm</string>
 <key>ProgramArguments</key>
 <array>
  <string>/usr/local/bin/hdapm</string>
  <string>disk0</string>
  <string>max</string>
 </array>
 <key>RunAtLoad</key>
 <true/>
</dict>
</plist>

The biggest change is that I removed the "LaunchOnlyOnce" and "ServiceDescription" keys. I didn't see a reason to load it only once, and ServiceDescription seemed undocumented. This solution isn't perfect, however. First of all, hdapm uses a seemingly undocumented back door to adjust the APM setting. Ideally, we would actually spawn a daemon that continuously monitors and adjusts the drive's APM level. I'm not yet convinced that Mac OS X won't override my setting. Still, I have been running with this configuration for a couple of days, and things seem to be working well.
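Setting the ownership and permissions mentioned above looks something like this (assuming the plist is installed as /Library/LaunchDaemons/hdapm.plist - adjust the path to wherever yours actually lives):

sudo chown root:wheel /Library/LaunchDaemons/hdapm.plist
sudo chmod 644 /Library/LaunchDaemons/hdapm.plist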

I have an open support issue with Western Digital to see if they have a fix for this issue. If there were some way to change the way the disk behaves under OS X, we could forgo the additional software, which would be great. I've also heard of a utility called wdidle, which allegedly lets you write new idle settings to the hard drive. However, I was unable to find any official site for this software, so I'm not using it.

Finally, I would like to thank two people. First, Doug Aghassi's post really explained the symptoms that he was experiencing and put me on the right track for solving the problem. Thanks, Doug. Also, Bryce McKinlay was kind enough not only to write the hdapm utility, but also to answer the questions that I emailed to him. Thanks, Bryce.

Sunday, September 20, 2009

More Painting Space Marines

I have an update to my Space Marine painting. I've continued to shade the marines. After the previous coat of 1:1 Regal Blue to Ultramarine Blue, I added the following:

  • 1:2 Regal Blue to Ultramarine Blue
  • Ultramarine Blue
  • 2:1 Ultramarine Blue to Codex Grey

Each coat is painted in a slightly smaller area. The goal is to shade the model to match an imaginary light source. In my case, my light source is directly above the model. Here are the results:


I'm now only painting 2 marines. I'm going to finish them before starting others, so that I can improve on my technique for those later models.

I also started painting my Tyranid spores. This has been basecoated with Chaos Black spray, then painted with Blood Red, then washed with Chestnut Ink. I had tried Red Ink, but unfortunately, that color is too close to Blood Red.

Wednesday, September 16, 2009

Painting Space Marines

For some reason, I got an itch to paint some Warhammer 40k figures. I've been slowly assembling them over... I don't know, maybe 2 years. Hey, I have a lot of things that I do in my free time. Anyway, I finally got around to priming them the other day, and I've been painting like a crazy person since then.

Now, I should mention that these aren't the first figures I've painted. I had painted a squad of 5 marines that came in a box along with 6 paints and a brush. Here's one of them.

That was a good learning experience. This time, I have quality glue, basing material, spray primer, a variety of brushes, and lots of paints and inks. I'm posting some photos of various steps of the process. Hopefully, by putting them here, I will actually finish painting them.


This space marine has been assembled, based, and primed. The basing material is sand and rock, glued into place with regular white glue. The whole model is sprayed with a flat black primer, to help the other paint stick better.

This marine has had his armor painted with Ultramarine Blue. Actually, his feet are missing some paint, but I'll get around to that. Additionally, the black ground cover has been drybrushed to look like sand and rocks. I started with rocky sand, painted it black, then painted it to look like rocky sand again. Crazy? Probably.

This marine has been washed with a blue ink (which I think has been replaced with Asurmen Blue Wash). Ink is used because it settles into the crevices and provides great depth.

This marine has had his armor panels painted with a 50/50 mix of Regal Blue and Ultramarine Blue. By leaving a slight gap around the edges of each panel, I let the blue wash peek through, which looks great.

For those who don't know much about 40k, these figures are pretty small. Here's a comparison shot.

Now imagine trying to paint those eye lenses. Yeah, I'm not looking forward to it, either. Besides the eyes, I still have a lot to do. I plan to put another few layers of blue on the armor, paint the shoulder pad edges, drybrush the metal pieces, and so on. If anybody reads this and has feedback or suggestions, I'd love to hear from you!

Friday, August 07, 2009

I wish Blogger would let me rename tags

Occasionally, if you read this via RSS, you will notice that I repost old articles. This isn't intentional - any time I edit or retag a post in Blogger, it puts it back on the feed. I don't see any way to say "quietly republish this post". The same problem occurs if I want to rename a tag - I have to remove the tag from all posts, and then re-add it to all posts. Please, Google, add features to Blogger to make this less painful.

T-Mobile Visual Voicemail Problems

T-Mobile recently released their Visual Voicemail application in the Android market. It had some launch problems, but those are mostly smoothed out at this point. The app works pretty well, and I'm glad that they have finally implemented it. However, the app does have its share of first-release problems. They are listed here, in the order that I hope T-Mobile addresses them.

VVM doesn't work with Wifi. Most people probably have Wifi enabled on their phones. After all, it's the most efficient (both bytes / time and power / byte) way to transfer data. However, VVM doesn't work with Wifi. It will neither notify you of new voicemails, nor will it download new messages. In order to make it work, you need to

  1. Turn off Wifi
  2. Wait or click "Synchronize Voicemails"
  3. Turn Wifi back on
Call me crazy, but that's just stupid. At the very least, the "Synchronize Voicemails" button should do those steps for you, similar to the My Account app. There has been some FUD about the reason for this omission. Some people claim that it's for "security". I'm going to make this perfectly clear: there is no security-related reason to prevent people from getting their voicemails over Wifi. It's easy to encrypt data that is transmitted over the internet. There are a number of possible reasons they don't support Wifi. It might just be too much work. Maybe they haven't had time yet. It might be cost prohibitive. Perhaps there is a technical restriction - they would need to read the SIM's IMSI into the app, which Android might not allow. Whatever the case, it's not a security issue.

VVM has a separate notification icon. Every time you get a new voicemail, you get the standard voicemail icon. In addition, you get a new VVM icon. For now, this is fine. If I have Wifi enabled, I still get notified of a new voicemail (via the standard voicemail icon). When the Wifi issue is fixed, however, I would like to see the new icon go away. The notification bar is crowded enough.

VVM uses the wrong audio stream. VVM uses the "media" audio stream. Many people complain that this prevents you from using a bluetooth headset to listen to your VM. I don't have a bluetooth headset, so I can't confirm this. It should use the "phone" audio stream.

The UI needs polish. There are some small look-and-feel issues:

  1. The VVM status bar icon doesn't match the Android UI Guidelines.
  2. After pressing the "Synchronize Voicemails" button, there is no feedback. No spinner, no progress bar, nothing.
  3. The long-press context menu on a VM does not include a "delete" option (only Open, Reply As, and Copy To)
  4. The buttons that appear when you press the "Menu" button have no icons.
  5. The "Copy to" screen is a little too technical. The file name defaults to vmn (i.e. vm0, vm1, vm2). It should instead default to something like "Voicemail from John Smith on 22 Jul, 2009". In addition, the save directory defaults to "/sdcard". Should users really be exposed to UNIX pathnames? Clicking the Save in Directory dropdown presents me with a file browser for my SD card. For me, this lists locations like ".Trashes" (I use a Mac), espeak-data (the data files for the Text-To-Speech engine), "where" (the data for Where), and other places that I probably shouldn't be saving random files. Do I really need to be able to specify the location to save the voicemail? Why not just save everything to /sdcard/voicemails? Or at least, why not assume that all voicemails get saved to /sdcard/voicemails or a subdirectory (i.e. you can't save a voicemail outside this directory, only inside)?

Initial, first-run experience is lousy. When I first installed and ran the app, it wasn't able to connect to the server. After disabling Wifi, it worked. I was taken to a set-up screen, but then got distracted by something and hit the back button. When I relaunched the application, the set-up screen wasn't presented. This worried me (was there some setup that needed to occur?), so I uninstalled the app and re-installed it. I don't think I did any harm, but the app didn't behave as I expected, so I didn't know what to think.

Deleting doesn't always work. I'm going to chalk this up to glitch behavior. The first time I used the app, I went through and deleted some old messages. Then I went into the analog voicemail system, and they were back! I deleted them a second time, and now they're really gone. shrug

Now, I don't mind all of those problems. I'm glad that T-Mobile finally released a VVM app. I'm glad that they released it early, warts and all. I hope that they are not done working on it. For me, the Wifi issue is huge. I'm connected to Wifi 90% of the time, and that means that the VVM app doesn't function as a voicemail app 90% of the time. I suspect many other people are in the same boat as me. Furthermore, Google Voice is coming. If the Wifi issue isn't fixed by the time GV is generally available, I might just jump ship, and T-Mobile doesn't want me to do that. I understand if T-Mobile can't fix this on their own - they might need support from Google. Still, every carrier is going to want to provide VVM, and it would behoove Google to provide whatever support necessary.

Saturday, July 18, 2009

Getting Started with Stack Overflow

I joined Stack Overflow shortly after it launched, but I didn't do anything with it. I found it in search results here and there, but I never asked any questions. I would have done more, but new users are pretty helpless. You can't vote up or down, you can't comment on answers, you can't post an answer with more than 1 link, etc. It's almost like you're not wanted. Compared to the relative freedom of Wikipedia, it was really demoralizing to me.

I decided tonight to actually try to get some reputation. Most of the interesting stuff happens around 50 reputation, so that's my goal. I answered 2 questions this evening. Suddenly, my rep is skyrocketing. I'm at 31 right now, and I bet that will continue to climb on its own. It seems that people are very willing to vote your answers up if they are relevant. As you can see, it shouldn't be hard to get to the point of actually being able to contribute.

So, if you want to get started with Stack Overflow, here are my suggestions.

  1. Go to the newest questions page.
  2. Find something that you know something about. Don't troll, and don't post to random topics about which you know nothing.
  3. Write an answer.
That should just about do it. Don't despair, it's easier than it initially seems.

Thursday, July 16, 2009

Fixing the Xbox 360's Grinding Noise / Tray Ejection Problem

I spent an evening performing unexpected surgery on my Xbox 360. When I put a game in, the drive made the most horrible grinding noise. On top of that, the drive would not stay closed. The tray would almost always eject seconds after being closed. Research led me to conclude that the rare earth magnet that is part of the disc clamp had probably become unglued. Since my initial warranty has long since expired and the red ring of death warranty only has another year, I decided to crack the case myself.

It's not worth going through the details, but I did find two useful videos. The first is an overview of the problem. The second is a good tutorial on opening the 360's case. I used some Zap-A-Gap brand contact adhesive that I had laying around to actually reattach the magnet.

I felt quite proud to have diagnosed, researched, and fixed the problem on my own (without sending my console to Microsoft for repairs). $100 plus shipping just to have some intern apply some glue is a little extreme. So many people have had this problem that YouTube videos just refer to it as "the grinding noise problem." Either Hitachi (the drive manufacturer) just made a lousy drive, or Microsoft didn't correctly anticipate the effect that their game furnace would have on the glue that Hitachi used. I don't know who is to blame, but Microsoft should extend their warranty on the 360 to 3 years for all defects, not just those that cause the red LEDs to light up in a circular fashion. I don't expect my car to wear out in 2 years, and I use it every day. My game console shouldn't wear out, either.