<h2>Why you should (sometimes) NOT use tail recursion in Scala (2016-01-25)</h2>
<p>There was a recent <a href="https://www.reddit.com/r/scala/comments/42koq6/why_you_should_use_tail_recursion_in_scala/">post on /r/scala</a> (<a href="https://anadea.info/blog/tail-recursion-in-scala">direct article link</a>) about how great tail recursion is. I agree with everything said in that article; this isn't an attempt to refute his points. But tail recursion has a dark side: it can be a huge hassle.</p>
<p>Non-tail recursive code has a very useful property: as each invocation on the stack completes, the previous invocation picks up <em>exactly</em> where it left off. So you can do some work, recurse, and then do more work when the recursion finishes. You can even recurse in a loop, meaning that the amount of work left to be done is dynamic and only known at runtime. In order to use tail recursion, you have to restructure your code so that there is <em>no</em> work to be done after the recursive call returns. While it's always possible to restructure your code in this way, it can be a nontrivial transformation. In complex cases, you may even need to use your own stack to keep track of remaining work. Sure, it's not the call stack, so you don't need to worry about blowing up when your collection gets to be too large. But with that property comes a bit of complexity.</p>
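<p>To make the restructuring concrete, here's a minimal illustration (my own example, not from the article): summing a list. The naive version leaves work (the addition) to do after the recursive call returns; the tail-recursive version carries that pending work forward in an accumulator instead.</p>

```scala
import scala.annotation.tailrec

// Non-tail-recursive: the + happens after the recursive call returns,
// so each invocation needs its own stack frame.
def sumPlain(xs: List[Int]): Int = xs match {
  case Nil    => 0
  case h :: t => h + sumPlain(t)
}

// Tail-recursive: restructured so that nothing remains to be done after
// the recursive call; the running total lives in the accumulator.
@tailrec
def sumTail(xs: List[Int], acc: Int = 0): Int = xs match {
  case Nil    => acc
  case h :: t => sumTail(t, acc + h)
}
```

<p>Both compute the same result, but only the second can handle a very long list without blowing the stack.</p>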
<hr>
<p>For example, I recently wrote an iterative implementation of Tarjan's topological sort of a directed graph. The overall algorithm <a href="https://en.wikipedia.org/wiki/Topological_sorting#Tarjan.27s_algorithm">is described well enough on Wikipedia</a>. As you can see, the recursion occurs within a loop and with additional work to be done after all the recursion is complete.</p>
<p>Here's my Scala implementation. I won't promise that it's the best code, but it seems to work. I have more comments in the actual code, but I've stripped them here for brevity.</p>
<pre><code>def topologicalSort[A](edgeMap: Map[A, Set[A]]): Seq[A] = {
  @tailrec
  def helper(unprocessed: Seq[Seq[A]], inProgress: Set[A], finished: Set[A], result: Seq[A]): Seq[A] = {
    unprocessed match {
      case (hd +: tl) +: rest => // [ [hd, ...], ... ]
        if (finished(hd)) {
          helper(tl +: rest, inProgress, finished, result)
        } else {
          if (inProgress(hd)) {
            throw new Exception("Graph contains a cycle")
          }
          val referencedVertices = edgeMap(hd)
          helper(referencedVertices.toSeq +: (hd +: tl) +: rest, inProgress + hd, finished, result)
        }
      case Nil +: (hd +: tl) +: rest => // [ [], [hd, ...], ... ]
        helper(tl +: rest, inProgress - hd, finished + hd, hd +: result)
      case Nil +: Nil => // [ [] ]
        result
    }
  }
  helper(edgeMap.keys.toSeq +: Nil, Set.empty, Set.empty, Nil)
}
</code></pre>
<p>(Assume that <code>edgeMap</code> contains one key for every vertex in the graph, even if the corresponding value is the empty set. This is an invariant that is enforced elsewhere.)</p>
<p>That <code>unprocessed</code> parameter is, essentially, the call stack. The outer <code>Seq</code> is used as a stack, while the inner <code>Seq</code> is used as a queue. Whenever we decide to visit a node, we push a new "frame" onto the front of that stack (and move the node into <code>inProgress</code>). Whenever the frame at the front of the stack is empty, it means that we have finished recursing children and can move the "current node" (encoded as the first element in the next frame) from <code>inProgress</code> to <code>finished</code>. And when we reach a state where the stack contains just one frame, and that frame is empty, we are done.</p>
<p>While this implementation won't blow the stack (at least, I don't think it will... it successfully sorted a graph with a path 10k vertices long), I wouldn't necessarily describe it as easy to understand. In my actual source code, the comments are nearly as long as the implementation. That in and of itself isn't a problem, but it's a shame that the source isn't more readable. (Though I'll freely admit that perhaps the lack of readability is my own fault.)</p>
<p>In fact, an astute reader might notice that the <code>match</code> expression is missing a case. What happens if the sequence looks like this:</p>
<pre><code>[ [], [], ... ]
</code></pre>
<p>That is, why don't we have a pattern match clause that looks like this:</p>
<pre><code>case Nil +: Nil +: rest => ???
</code></pre>
<p>This particular case can never occur. An invariant of this implementation is that, apart from the first queue in the stack, no other queue can be empty. Again, this fact is pointed out in a comment... a comment that isn't needed in the non-tail recursive version.</p>
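<p>As a quick end-to-end check of the implementation above, here it is again specialized to <code>List</code> (so the <code>Nil</code> patterns are guaranteed to match, whichever concrete <code>Seq</code> the library would otherwise hand back), run against a small graph of my own invention:</p>

```scala
import scala.annotation.tailrec

// The same algorithm, with Seq pinned down to List.
def topoSort[A](edgeMap: Map[A, Set[A]]): List[A] = {
  @tailrec
  def helper(unprocessed: List[List[A]], inProgress: Set[A],
             finished: Set[A], result: List[A]): List[A] =
    unprocessed match {
      case (hd :: tl) :: rest =>
        if (finished(hd)) helper(tl :: rest, inProgress, finished, result)
        else {
          if (inProgress(hd)) throw new Exception("Graph contains a cycle")
          // push a new "frame" holding hd's referenced vertices
          helper(edgeMap(hd).toList :: (hd :: tl) :: rest,
                 inProgress + hd, finished, result)
        }
      case Nil :: (hd :: tl) :: rest => // hd's children are all finished
        helper(tl :: rest, inProgress - hd, finished + hd, hd :: result)
      case Nil :: Nil => // lone empty frame: done
        result
    }
  helper(edgeMap.keys.toList :: Nil, Set.empty, Set.empty, Nil)
}

// 1 -> 2 -> 3, 1 -> 3, and an isolated vertex 4
val order = topoSort(Map(
  1 -> Set(2, 3), 2 -> Set(3), 3 -> Set.empty[Int], 4 -> Set.empty[Int]))
```

<p>Every edge's source ends up before its target in <code>order</code>, whichever of the valid orderings the traversal happens to produce.</p>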
<hr>
<p>The author points out that tail recursion causes the compiler to rewrite your apparently recursive function as a loop. He also demonstrates a case where tail recursion is shorter (and, I'd agree, more readable) than an explicit loop. But there are cases where the opposite is true. One that I've come across a few times is what I will call "partitioning by type". Suppose you have a union type:</p>
<pre><code>sealed trait Shape
case class Circle(c: Point, r: Float) extends Shape
case class Rectangle(lowerLeft: Point, w: Float, h: Float) extends Shape
case class Triangle(v1: Point, v2: Point, v3: Point) extends Shape
case class Polygon(vs: Seq[Point]) extends Shape
</code></pre>
<p>And suppose you have a <code>Seq[Shape]</code> that you would like to split into four independent lists: a <code>Seq[Circle]</code>, a <code>Seq[Rectangle]</code>, a <code>Seq[Triangle]</code>, and a <code>Seq[Polygon]</code>. A tail recursive implementation might look like this:</p>
<pre><code>type GroupShapesByTypeResult = (Seq[Circle], Seq[Rectangle], Seq[Triangle], Seq[Polygon])

def groupShapesByType(shapes: Seq[Shape]): GroupShapesByTypeResult = {
  @tailrec
  def helper(remaining: Seq[Shape], circles: Seq[Circle], rectangles: Seq[Rectangle],
             triangles: Seq[Triangle], polygons: Seq[Polygon]): GroupShapesByTypeResult = {
    remaining match {
      case Nil =>
        (circles, rectangles, triangles, polygons)
      case (hd: Circle) +: rest =>
        helper(rest, circles :+ hd, rectangles, triangles, polygons)
      case (hd: Rectangle) +: rest =>
        helper(rest, circles, rectangles :+ hd, triangles, polygons)
      case (hd: Triangle) +: rest =>
        helper(rest, circles, rectangles, triangles :+ hd, polygons)
      case (hd: Polygon) +: rest =>
        helper(rest, circles, rectangles, triangles, polygons :+ hd)
    }
  }
  helper(shapes, Nil, Nil, Nil, Nil)
}
</code></pre>
<p>Not terribly readable. But wait. We don't need to manage recursion ourselves; we could just use a fold:</p>
<pre><code>def groupShapesByType(shapes: Seq[Shape]): GroupShapesByTypeResult = {
  val init: GroupShapesByTypeResult = (Nil, Nil, Nil, Nil)
  shapes.foldLeft(init) { (acc, shape) =>
    val (circles, rectangles, triangles, polygons) = acc
    shape match {
      case c: Circle =>
        (circles :+ c, rectangles, triangles, polygons)
      case r: Rectangle =>
        (circles, rectangles :+ r, triangles, polygons)
      case t: Triangle =>
        (circles, rectangles, triangles :+ t, polygons)
      case p: Polygon =>
        (circles, rectangles, triangles, polygons :+ p)
    }
  }
}
</code></pre>
<p>We've traded explicit loop management for more complex destructuring. Arguably more readable, but still not great. OK, what if we abandoned this functional approach (Scala is multi-paradigm after all) and went with an explicit loop and mutability instead:</p>
<pre><code>def groupShapesByType(shapes: Seq[Shape]): GroupShapesByTypeResult = {
  var circles: Seq[Circle] = Nil
  var rectangles: Seq[Rectangle] = Nil
  var triangles: Seq[Triangle] = Nil
  var polygons: Seq[Polygon] = Nil
  for (shape <- shapes) {
    shape match {
      case c: Circle    => circles = circles :+ c
      case r: Rectangle => rectangles = rectangles :+ r
      case t: Triangle  => triangles = triangles :+ t
      case p: Polygon   => polygons = polygons :+ p
    }
  }
  (circles, rectangles, triangles, polygons)
}
</code></pre>
<p>Since the patterns and corresponding bodies are so much simpler, I took the liberty of combining them into single lines. I don't think it hurts readability. I don't know for sure, but I would even expect this implementation to run faster than either of the other implementations. The tail recursive version needs to successively chop the front off our list. And the version with <code>foldLeft</code> needs to constantly decompose and rebuild the loop state variable. This implementation just walks an iterable and updates the corresponding sequence. Persistent collections are awesome, folds are awesome, but walking an iterator and updating vars is hard to beat.</p>
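<p>For what it's worth (this is my own addition, not something from the original post), the standard library's <code>collect</code> can express each partition directly. It trades the single pass for four traversals of the input, which is often a perfectly fine trade:</p>

```scala
// Restating the types so the snippet stands alone; Point is assumed to be
// a simple case class, since the original post doesn't define it.
case class Point(x: Float, y: Float)
sealed trait Shape
case class Circle(c: Point, r: Float) extends Shape
case class Rectangle(lowerLeft: Point, w: Float, h: Float) extends Shape
case class Triangle(v1: Point, v2: Point, v3: Point) extends Shape
case class Polygon(vs: Seq[Point]) extends Shape

// Each collect keeps only the elements matching its pattern, already
// narrowed to the right element type.
def groupShapesByType(shapes: Seq[Shape]) =
  (shapes.collect { case c: Circle    => c },
   shapes.collect { case r: Rectangle => r },
   shapes.collect { case t: Triangle  => t },
   shapes.collect { case p: Polygon   => p })
```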
<hr>
<p>Again, I'm not trying to refute anything that the original post's author is saying. Tail recursion <em>is</em> great. But tail recursion comes with the cost of complexity. For situations with relatively shallow recursion trees (and with a clear upper bound to the recursion depth), I'm actually OK with non-tail recursion. For example, using non-tail recursion to traverse an XML document that is known to be fairly flat is perfectly fine. It might even be fine for traversing a parsed AST for a programming language. Sure, most programming languages allow expressions to be nested to arbitrary depths, but most code written by <em>reasonable</em> humans has a soft upper bound on how deeply those expressions are nested.</p>
<p>If you can naturally express your algorithm with tail recursion, go for it! But if it's unnatural, consider whether tail recursion is actually needed.</p>
<h2>How JetBrains Lost Years of Customer Loyalty in Just a Few Hours (2015-09-04)</h2>
<p><b>NOTE:</b> This post was originally written after JetBrains <a href="http://blog.jetbrains.com/blog/2015/09/03/introducing-jetbrains-toolbox/">announced</a> a controversial new licensing model. Many people spoke out about it. The next day, they <a href="http://blog.jetbrains.com/blog/2015/09/04/we-are-listening/">followed up</a> to say that they were listening to the feedback, and two weeks later <a href="http://blog.jetbrains.com/blog/2015/09/18/final-update-on-the-jetbrains-toolbox-announcement/">made a final post</a> with significant refinements from their original announcement. But the content in this post predates both of those follow-ups. Keep that in mind when reading.</p>
<hr>
<p>Yesterday's big news, at least for many developers, is that JetBrains - maker of popular tools like IntelliJ and ReSharper - is moving to a software-as-a-service subscription model for their products.</p>
<p>Previously, buying a JetBrains product got you a perpetual license and a year of upgrades. Once the license expired, any software you had received under that license would continue to work, but you would need to buy another license to get further upgrades. It was a simple model that worked just fine for many people, and most customers upgraded every year.</p>
<p>Starting November 2, though, that all stops. After that date, JetBrains will no longer sell these perpetual licenses. Instead, you can rent access to their software on a month-by-month basis.</p>
<p>And there was much raging.</p>
<p>Now, don't get me wrong. A subscription model has apparently been a common request, and some of the feedback to the announcement has been positive. This arrangement is especially good for consulting shops that do one project in C# and the next project in Java. Rather than committing to a year of use, they can choose to only pay for what they need in any given month.</p>
<p>But that's just one type of customer. There are also plenty of single-platform shops. I know a lot of people who just need ReSharper or IntelliJ. These customers will probably notice no big difference - they will renew for a year at a time, and probably get a small discount as well.</p>
<p>This all sounds great! What's the problem?</p>
<p>The first change, and probably the biggest, is that the software will apparently stop working when you stop paying for your subscription. That's probably going to impact indie developers the most. For a developer with an unstable income, it might be perfectly fine to stay on an older version of the software until they've stashed enough cash to afford the upgrade. That will no longer work. But it's not just indie developers. I've seen companies who forget to renew their licenses promptly or who have long and convoluted processes to approve the expenditure. I guess, under the new model, development grinds to a halt until the purchase goes through.</p>
<p>Another controversial aspect is that the software will need to phone home. Now, JetBrains has given a gracious window - the software only has to dial the mothership once every 30 days. And customers in an internet-restricted environment will be able to install a license server inside their network to manage the license pool. This is not an uncommon practice for enterprise or specialized software. But it does create an interesting challenge. The <a href="https://sales.jetbrains.com/hc/en-gb">licensing FAQ</a> indicates that it's allowed for an employee to use their personal license at work; I've often taken advantage of this. But it doesn't look like the JetBrains license server supports personal licenses. For people in an internet-restricted environment, it looks like this perk is no longer available.</p>
<p>OK, so users lose some ability that they previously had, but the software is cheaper, right? Customers win a little and lose a little, so maybe it's a wash. Yeah, the software is cheaper... sort of. My last IntelliJ upgrade was $99 for the year. Under the new model, I'll only pay $89 for a year. Huzzah! Well, that's only applicable for users who already own IntelliJ. New users will pay $119 per year, which is a lot less than the old, introductory price of $199. But here's the deal: if I ever let my subscription lapse, it looks like I end up losing my grandfathered discount. And even then, the prices given are all listed as promotional prices that are only good until Jan 31, 2016. Is this a sign that the prices will jump in the near future? JetBrains certainly tried to promote this new licensing model by saying that it would make their software more affordable. It does make it cheaper, especially for new users (which is great!), but the situation for existing users is a little more murky. It's only cheaper for me if I keep renewing promptly. If I ever miss a renewal, my yearly costs jump by 30%.</p>
<p>But none of those details really explain why the internet got so upset. I think JetBrains miscalculated just how much people like the current licensing model. Sure, offering a subscription-oriented model makes sense for some kinds of customers. But there are many other customers for whom a subscription model is going to be worse. JetBrains indicated that this change is being made primarily to provide a better service to their customers. The feedback that they got today is that many customers don't see the new scheme as an improvement. Now, JetBrains <a href="https://www.reddit.com/r/programming/comments/3ji148/jetbrains_toolbox_monthly_yearly_subscription_for/cuphdfr">has said (update 3)</a> that they would take the feedback under consideration, which is definitely a good sign.</p>
<p>It's always awkward when a company says "this is good for our customers" and the customers respond with "no it's not". We saw this a few years ago with Adobe. In that case, it <a href="https://www.youtube.com/watch?v=78yigV0GYGQ">was completely clear</a> that they didn't care what their customers wanted. They had decided on a course of action and nothing could stop that train. But I don't think anybody was surprised to see Adobe go in that direction. People liked Adobe's products, but I don't know that anybody really liked Adobe as a company. JetBrains was different. They built a loyal customer base on quality software and reasonable policies. JetBrains products had become the examples people used when saying "you know, open-source is great and all, but I'm happy to pay for quality software". When I read some of the responses to yesterday's announcement, I get the impression that existing customers feel a sense of betrayal. They're confronted with the idea that maybe JetBrains is no different from Adobe. Maybe all the goodwill that they felt for this company was misplaced.</p>
<p>Ultimately, JetBrains's response to this kerfuffle will show the underlying motivation behind this change. Will they listen to the feedback and truly offer licensing options that keep everybody happy? Or will they double down on the software-as-a-service model, in the hopes that the controversy will just blow over?</p>
<p>Of course, listing the problems isn't super useful. If anybody from JetBrains reads this, I do have some suggestions for what you could do to appease the crowds:</p>
<ul>
<li>Continue to offer perpetual licenses. I don't think people are bothered by you offering subscription licensing; indeed, some customers seem to prefer it. But for customers who are happy with the status quo, being forced onto software that could suddenly stop working is a really hard pill to swallow.</li>
<li>Or... require that corporate licenses be subscription-based, but continue to offer perpetual, personal licenses. I'm guessing that most of the people upset with this change are people who are currently using personal licenses. These are probably your most loyal, and also most vocal, customers. These are the kinds of people that get your products into an enterprise environment. At least keep them happy.</li>
<li>Take another look at your pricing. You're asking users to replace perfectly functional software with software with a coin slot; if you stop feeding money into the meter, the software stops working. You have to give those users something in return. If you did something drastic - like cutting those prices in half - people might be far more willing to accept this software-as-a-service model.</li>
<li>Offering lower introductory prices is great! But you don't need to fundamentally change your pricing model to do that. You've offered sales before - I got my initial ReSharper and IntelliJ licenses during your end-of-the-world sale back in 2012. If you want to attract new users, you could just, you know, lower your buy-in price. Heck, you could even raise your renewal prices by 10%. I suspect that such a change wouldn't have even raised an eyebrow.</li>
<li><b>(Late edit after reading more comments)</b> As a reward for subscribing for a year or more at a time, issue perpetual licenses for products released during that time. If I subscribe for a month and then let my subscription lapse, my software stops working. But if I subscribe for a year and THEN let my subscription lapse, any software released during the window continues to work. This creates a situation where JetBrains keeps making money, but customers aren't punished for letting their subscription lapse.</li>
</ul>
<p>I was a huge Eclipse fan back in 2010, but a friend convinced me to switch to IntelliJ and I've been a loyal user since. I'm not writing as an outside observer, but as a concerned customer. Now, JetBrains doesn't really care about my business; I'm guessing that I pay for something like 4 of their developer hours per year. But people like my friend, and now me, are vital to JetBrains growing their business. I pushed and pushed to get ReSharper installed on all my coworkers' machines; that ended up being something like 10 corporate licenses, which pays for a lot more development time. JetBrains got to where they are today by building a very loyal fanbase. I hope they realize that alienating that fanbase could tear them back down.</p>
<h2>A quick overview of what WebAssembly is and what it is not (yet) (2015-06-18)</h2>
<p>There's been a lot of buzz today about WebAssembly, and that's completely understandable. A bytecode form of Javascript has been on the minds of many web developers for a long time. But the online commentary seems to have been based more on hopes than on released information. I hope to clear up some of that confusion. Note that I'm not involved in the project at all, so some of this information is likely incorrect; feel free to leave a comment if I've gotten something wrong.</p>
<p>WebAssembly isn't a spec yet. It's not even a draft spec. It's an idea and a proof-of-concept. So far, that's all that's in the public. But it has a lot of promise.</p>
<p>The WebAssembly <a href="https://github.com/WebAssembly/design/blob/master/HighLevelGoals.md">roadmap</a> essentially spells out three phases:</p>
<ol>
<li>A minimum viable product (MVP) that is roughly analogous to asm.js, specifically targeting C/C++ as the source language</li>
<li>Additional features, such as threads, SIMD, and proper exception handling, all still with a focus on C/C++</li>
<li>"Everything else", including features meant to support more languages. One such feature is access from WebAssembly code to objects on the garbage-collected Javascript heap. This could enable WebAssembly code to access the DOM and web APIs (which would not be supported in either of the previous iterations). This phase also contains support for things like large heaps, coroutines, tail-call optimization, mmapping files, etc. If and when we get to this phase, I would expect it to get further split.</li>
</ol>
<p>There will also be a <a href="https://github.com/WebAssembly/design/blob/master/Polyfill.md">polyfill</a>, since browsers will not initially support WebAssembly. In fact, there is already a polyfill, but it's more of a proof-of-concept than an actual implementation. There is no WebAssembly spec yet; this polyfill is merely to test the viability of a binary-encoded AST.</p>
<p>Initially, it's not wrong to think of WebAssembly as binary-encoded asm.js, though that will change over time. It may eventually grow to the point that it could replace Javascript, but in its first two incarnations, you will still need some JS code (to interact with the DOM and with web APIs). A likely use case is to compile low-level, algorithmic code (think image processing, compression, or encryption code) to WebAssembly, but to still write the bulk of your application in plain JS. If you're not familiar with asm.js, it might make sense to look into it; many of the same limitations of that environment also appear to apply to the first incarnation of WebAssembly.</p>
<p>However, even at this early stage, WebAssembly does specify some features that even asm.js doesn't provide. In particular, it talks about 64-bit integer operations, which asm.js can't natively provide. The current polyfill POC doesn't seem to support them, and they may choose for the polyfill to always implement them as floating-point operations, but an actual runtime would need to provide proper support for int64 (and other sizes as well, like int8 and int16).</p>
<p>WebAssembly only deals with the binary format and runtime environment; it doesn't provide any new APIs for dealing with the DOM or the network or anything like that. It may eventually provide alternatives to other web APIs (WebWorkers would be an obvious one, when WebAssembly eventually adds threading support).</p>
<p>The current plan for WebAssembly is to not use bytecode in the same way as JVM or CLR use bytecode. Those VMs represent their bytecode as an instruction stream, similar to instructions for non-virtual machines. WebAssembly, on the other hand, appears to be going more for a "serialized abstract syntax tree" form. The claim seems to be that this can be compressed more efficiently than can be achieved using general-purpose compression routines, and that doesn't seem too crazy. This distinction isn't important for web app developers, though it could mean that "disassembled" WebAssembly is easier to grok than disassembled JVM bytecode.</p>
<p>WebAssembly will have defined binary AND text forms. So, while you won't be able to <code>curl</code> a WebAssembly file and immediately understand it, there will be tooling to convert from the binary form to the text form (which will certainly eventually be built into browsers). It seems likely that the bytecode semantics will make concessions to retain some degree of human readability when converted to text form.</p>
<p>I really like this <a href="http://arstechnica.com/information-technology/2015/06/the-web-is-getting-its-bytecode-webassembly/?comments=1&post=29227533">quote from Peter Kasting</a> in the comments on Ars. Like, seriously, if you have an Ars account, go give him some fake internet points.</p>
<blockquote>
<p>Notably, the people working on WebAssembly <em>are</em> the PNaCl (Google) and asm.js (Mozilla) teams. In some sense this can be considered the followon effort to those projects, meant to combine the best attributes from each, and in a way that can be agreed on by all browser vendors.</p>
<p>This is exactly how the system is supposed to work: individual teams try to advance the state of the art, and eventually all those lessons learned are incorporated into a new and better system. See e.g. SPDY -> HTTP2. WebAssembly draws on both the past work and the experience of all those involved, and wouldn't be what it is without them.</p>
<p>I say this partly so the sorts of people who have bemoaned "non-standard" vendor efforts in the past may have reason to pause next time they feel the urge to do so. No one wants a balkanized world forever, but that vendor-specific effort may gain the sort of real-world experience necessary to come back and design a great cross-vendor solution afterwards. All four major browser vendors have taken flak for this sort of thing in the past, and in my mind often unfairly so.</p>
</blockquote>
<p>All told, it looks like this is just the first, small step. I'm a little surprised that they went public so early. Either they really plan to develop it in the open, or the tech media got wind of it and blew it well out of proportion. Whatever the case, this looks like it will be a large undertaking. It's very promising that everybody seems to be at the table. And given that the browser vendors appear to want to actively develop their products, we might actually be able to use this within a couple of years.</p>
<p>What a time to be alive!</p>
<h2>The joys of parsing a toy programming language (2015-06-16)</h2>
<p>In my personal time, I've been playing at building a toy programming language. Progress has been slow, but things are finally starting to come together. Last night, I ended up finding an interesting bug, and I wanted to document it here.</p>
<p>The syntax of my language isn't super complicated. Here's an example of a function:</p>
<pre><code>def foo(a, b) = a + b
</code></pre>
<p>Because this language is a so-called modern language, it has to support lambdas as well. Here's an example:</p>
<pre><code>val bar = (a, b) => a + b
</code></pre>
<p>You can call functions and lambdas using the same syntax:</p>
<pre><code>foo(5, 7)
bar(5, 7)
</code></pre>
<p>Naturally, since lambdas are first-class, we need to support more complicated call syntax as well. In particular, this should be allowed:</p>
<pre><code>((a, b) => a + b)(5, 7)
</code></pre>
<p>In order to support that, here's the section of the grammar that describes function calls (you can imagine the rest of the grammar):</p>
<pre><code>FnCall   := Callable lParen Args rParen
Callable := identifier
         := lParen Expression rParen
Args     := Expression MoreArgs
         := epsilon
MoreArgs := comma Expression MoreArgs
         := epsilon
</code></pre>
<p>My compiler was able to successfully compile and run this extremely simple program:</p>
<pre><code>val bar = (a, b) => a + b
bar(5, 7)
</code></pre>
<p>But imagine my surprise when it failed to compile this program:</p>
<pre><code>val bar = (a, b) => a + b
(bar)(5, 7)
</code></pre>
<p>Looking at the parse tree, it was obvious what had happened. It turns out that my grammar was ambiguous, and the parser had chosen this interpretation:</p>
<pre><code>val bar = (a, b) => a + (b(bar))(5, 7)
</code></pre>
<p>Rather than parsing this as a definition and a corresponding expression, it instead parsed it as a single definition. It assumed that <code>b</code> was a function of one parameter, which returned a function of two parameters. Even this simpler grammar has the same problem:</p>
<pre><code>FnCall   := identifier lParen Args rParen
Args     := Expression MoreArgs
         := epsilon
MoreArgs := comma Expression MoreArgs
         := epsilon
</code></pre>
<p>Given this program:</p>
<pre><code>val bar = (a, b) => a + b
(5 + 7) * 2
</code></pre>
<p>This could (and would) get parsed as:</p>
<pre><code>val bar = (a, b) => a + b(5 + 7) * 2
</code></pre>
<p>(It's worth noting that, if I had been using a shift / reduce parser, this would have been detected as a S/R conflict. But I'm not using a S/R parser; I'm using the Scala parser combinator library, mostly because it was easy to get started and I haven't outgrown it yet.)</p>
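<p>To make the ambiguity concrete, here's a minimal sketch using the parser combinator library (the grammar fragment is my own simplification of the one above, and it assumes the <code>scala-parser-combinators</code> module is on the classpath). <code>RegexParsers</code> skips <em>all</em> whitespace between tokens, newlines included, so a parenthesized expression on the next line is happily consumed as an argument list:</p>

```scala
import scala.util.parsing.combinator.RegexParsers

object Mini extends RegexParsers {
  def name: Parser[String] = "[a-z]+".r
  def callable: Parser[String] = name | "(" ~> expr <~ ")"
  // an expression is a callable, optionally followed by an argument list
  def expr: Parser[String] =
    callable ~ opt("(" ~> repsep(expr, ",") <~ ")") ^^ {
      case c ~ None       => c
      case c ~ Some(args) => s"$c(${args.mkString(", ")})"
    }
}

// The newline before "(bar)" is treated as plain whitespace, so the
// whole input parses as the single call b(bar).
val result = Mini.parseAll(Mini.expr, "b\n(bar)")
```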
<p>Looking at Scala, from which I stole a lot of my syntax, I found that newline handling is a bit complicated, though the rules are laid out plainly in section 1.2 of the Scala language spec. Essentially, a newline can be treated either as plain whitespace or as a statement terminator, depending on the context. Within a parenthesized expression, newlines are always treated as whitespace. Outside an expression, newlines are treated as statement terminators if they appear between a token that could end a statement and a token that could begin a statement. On one hand, having written a fair amount of Scala, the rules feel pretty natural and intuitive. However, you do end up with strange behavior at the edges, as the following example demonstrates:</p>
<pre><code>(succ
  (5))   // evaluates to 6: inside parentheses, the newline is just whitespace
{succ
  (5)}   // evaluates to 5: inside braces, the newline ends the first statement
</code></pre>
<p>On the other hand, Haskell (with which I'm admittedly not very familiar) appears to use indentation level to determine expression grouping. That is, while this is valid:</p>
<pre><code>main = putStrLn "hi"
</code></pre>
<p>and this is also valid:</p>
<pre><code>main = putStrLn
         "hi"
</code></pre>
<p>this is not:</p>
<pre><code>main = putStrLn
"hi"
</code></pre>
<p>Context is not considered, so unlike Scala, this still fails:</p>
<pre><code>main = (putStrLn
"hi")
</code></pre>
<p>Both of these approaches have merit. Haskell's approach is natural enough, though there are some unambiguous representations that it would reject. Scala's approach is more permissive, at the cost of more complicated implementation rules and more surprising behavior. I can't tell which is more appropriate for my language.</p>
<p>For now, I took neither approach. I realized that, as far as I know, I have but one case where expressions are ambiguous - when the parameters to a function appear on a separate line from the <code>Callable</code> itself. It was possible for me to simply require that the whitespace between the <code>Callable</code> and its parameters not include any newlines. So an identifier at the end of a line will NEVER be considered to be a function call. I don't think this will remain this way forever, but I think it will get me unstuck.</p>
<h2>Updating a System Shock 2 Wallpaper for HD Resolutions (in Javascript!) (2013-03-04)</h2>
<p>Back when I was in college, I was a big System Shock 2 fan. My favorite co-op experience of all time was when my dorm roommate and I played SS2 together. I had all kinds of ideas for case mods (even though I had neither the money nor the tools to make it happen). Ultimately, my only creative contribution to the world of Shock was to combine two wallpapers that were floating around the net into one of my very own:</p>
<p>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3o54OPMwl74k_LOZOFdeHGJ3oCMRPBfj46Wxn8sKtD9MUpKI4g4gyWNOM97crXGlz0dUGZzg5zaG8tQYEue-4tII1Lml2sup005aFWPg5gGvos5hGmduRddr4b3ezX4hZpcBr/s1600/Digital+Shodan+Blend+2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3o54OPMwl74k_LOZOFdeHGJ3oCMRPBfj46Wxn8sKtD9MUpKI4g4gyWNOM97crXGlz0dUGZzg5zaG8tQYEue-4tII1Lml2sup005aFWPg5gGvos5hGmduRddr4b3ezX4hZpcBr/s320/Digital+Shodan+Blend+2.png" width="320" /></a>
</p>
<p>Yeah, I was pretty pleased with myself back in the day. In any case, I wanted to commemorate the recent <a href="http://www.gog.com/gamecard/system_shock_2">GoG re-release of SS2</a> by inviting Shodan to adorn my desktop yet again. Unfortunately, screen resolutions have increased quite a bit in the intervening years, and a pixelated Shodan simply won't do. Fortunately, in the GoG re-release, they included a ludicrously high resolution 5100x3338 pixel render. All I need to do is to scale that down, generate the ASCII half, blend them, and Bob's your uncle.</p>
<p>I'm sure that there are a lot of image to ASCII generators out there, but I can't shy away from a chance to learn something, so I decided to try to write my own. That's not even the interesting part of the story. Because I'm a masochist, I decided to do it with HTML and Javascript. I figured that, between the drag-and-drop API, canvas, and a high-performance JS engine like V8, I could probably get away with it.</p>
<p>Many hours later, I have something that basically works. I'll probably clean it up and get it posted to GitHub. It wasn't too hard to allow dropping an image file onto the page. I end up doing a lot of work against a scratch canvas before finally dumping the output into an image element using the <a href="http://www.w3.org/TR/html51/embedded-content-0.html#dom-canvas-todataurl">HTMLCanvasElement toDataURL method</a>. This is great; I can then drag the image off the page and onto my desktop (something that the canvas element doesn't automatically do). Even though the data URL is ridiculously long, it correctly displays on the screen. However, when I was working with the original 17 megapixel image, I found that dragging the output image out of my browser would immediately crash the Chrome tab. Fortunately, Chrome has no problems with the image at my target resolution (1920x1080).</p>
<p>Because this extremely long data URL feels pretty sketchy, I looked to see if there was a way around it. I would love to output the results to a canvas element instead of an image. All I need is to use the DnD API to make the canvas a valid drag source. Of course, in order to do that, I need to be able to generate the PNG bytestream, as well as synthesize a File object in the browser. While it's definitely possible to build a pure-JS PNG encoder, I don't see any way to synthesize a File object. Although the DnD spec specifically asks for a File, maybe it would be happy with an arbitrary Blob instead; I don't know, and I haven't yet tried. If the spec doesn't support this use case, it's a shame; I can think of a number of cases where it would be neat to generate a file from client-side JS.</p>
<p>I like to think that my skills as an artist have improved in the intervening years as well. A little stylistic shading, and here is the result.</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3ND7GEqmDYBXIgUzT7WF17BhhsjVL75dXMtODJY6GMujOFNi-IUwsCg-V1lu7Uyu0KYujLbDNwofqsTiri9VjJwweGFSqSmkTK7ZU5tG-SHHzZuoROBpG1mWJ42RfT7indecG/s1600/Shodan+Blend+3.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3ND7GEqmDYBXIgUzT7WF17BhhsjVL75dXMtODJY6GMujOFNi-IUwsCg-V1lu7Uyu0KYujLbDNwofqsTiri9VjJwweGFSqSmkTK7ZU5tG-SHHzZuoROBpG1mWJ42RfT7indecG/s320/Shodan+Blend+3.png" /></a></p>
<p>I used a different technique to generate the digital side (the original used 0s and 1s and modulated the intensity on a pixel-by-pixel basis; I achieve my shading by choosing from a larger palette of characters). Still, I feel like the end result has the same tone as the original. And just like last time, I'm pretty pleased with myself. Let me know what you think!</p>
<p><b>Edit:</b> The code is available <a href="https://github.com/balefrost/img2ascii">on GitHub</a>. You can try it out <a href="http://balefrost.org/programming/img2ascii/">on my site</a>.</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-77646671752329737512012-01-14T15:57:00.001-05:002012-03-02T22:27:23.452-05:00Using Apache on Mac OS X to serve files outside ~/Sites<p>I'm working on a web project that basically contains just static HTML and Javascript (well, OK, there's also one small PHP script, but it might be going away in the near future). I tend to keep all my source code in ~/src, but to host it, I also need it to appear in ~/Sites. After some small trial-and-error, I ended up putting everything (git repo and all) into ~/Sites, and then symlinking to it from ~/src. It wasn't pretty, but it worked.</p>
<p>So I just did some reorganization that pretty much invalidated that old structure. In particular, I have moved everything that needs to be deployed into ~/src/project/web. However, I want it to be accessible via http://localhost/me/project. I tried physically moving the project back into ~/src, and then making a symlink to the subdirectory, but that didn't work. Apache would still produce 403s for all the relevant files. So I had to roll up my sleeves and dive into Apache configuration.</p>
<p>Before I go further, I'm compelled to pull out the old soapbox. I have painfully little experience with Apache - I have never had to configure or support it in a production environment, and that makes me happy. From this position of ignorance, I have decided that Apache is a dinosaur that should have died a long time ago. For example, instead of configuring the server from the request's point of view (as has been popularized by Rails' routing logic), it is configured from the filesystem's point of view. The default Mac OS configuration has, buried somewhere in the middle of the file, a directive that disallows the serving of all files under /. Because, I guess, they would be served by default if that directive wasn't present? But still, nobody seems to want to spend the time to produce a replacement web server, and so we struggle on. </rant></p>
<p>The default Apache install on Mac OS X 10.7 uses a split Apache configuration. The bulk of the configuration is in /etc/apache2/httpd.conf. However, each user also gets their own /etc/apache2/users/me.conf, each of which is imported into the main configuration. And while the main configuration file specifies the FollowSymLinks option, I discovered that the same is not true of my personal config file. All I had to do was add the FollowSymLinks option to that configuration file, restart Apache, and everything started working.</p>
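<p>For reference, the per-user file looks roughly like the stock block below; the fix was adding <code>FollowSymLinks</code> to the <code>Options</code> line (your username, path, and existing options may differ):</p>

```apacheconf
<Directory "/Users/me/Sites/">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>
```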
<p>So if you have only basic web serving needs, the default config should suffice. If, however, you want/need to spread the files around your disk, you need to mess with the Apache configuration.</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-68988614825771101472011-09-05T21:40:00.001-04:002011-09-05T21:41:25.221-04:00Mysterious, Blank User in 10.7 Sharing Dialog<p>I wanted to copy some files from my PC to my Mac. When I went to turn on SMB sharing, I came across this:</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNPew15BD4jutmam0wlnOMfn38O1-KOK7HwbKXTLmOmbKh5b-8V9HJhGP_JhQ4AnbawliAvbdn_d3LrM0TRfoeLyUaIUU_xbcs2fyCB6mIqgbvKn9GvJz9-g_fCs9jHjtn-VMt/s1600/Screen+Shot+2011-09-05+at+9.23.38+PM.png" imageanchor="1" style=""><img border="0" height="339" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNPew15BD4jutmam0wlnOMfn38O1-KOK7HwbKXTLmOmbKh5b-8V9HJhGP_JhQ4AnbawliAvbdn_d3LrM0TRfoeLyUaIUU_xbcs2fyCB6mIqgbvKn9GvJz9-g_fCs9jHjtn-VMt/s400/Screen+Shot+2011-09-05+at+9.23.38+PM.png" /></a></p>
<p>I was wondering about the identity of this phantom user. It turns out that it is the macports user. He doesn't show up on the login screen. He never showed up under 10.6. Apparently, Apple changed something about the way users are reported to applications.</p>
<p>If it bothers you, you can fix it with dscl.
<pre>sudo dscl . -create /Users/macports RealName macports</pre></p>
<p>The first parameter is the machine you want to administer; . is apparently a shortcut for localhost. Then we give the command - we want to create a new key. Then we specify where this key should be created - in this case, the macports user's Directory Services path. Next is the name of the key - RealName is what appears to be used by the sharing dialog. (RealName is also assigned on users you create through System Preferences). Finally, we provide a value for this user's name. Now, we have this:</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFz0FyrAToH3gMUJs4QF5aFosbDmge7_geRSR3JJ61CdYO65x8cGzMZK3mKcuZb9IsoeoDtyb2n4g8UXmWz5x0kUKr_2j3HmBh_plODK5teVqpIoDquhWS4IjyCaOVGRKvAsRn/s1600/Screen+Shot+2011-09-05+at+9.37.49+PM.png" imageanchor="1" style=""><img border="0" height="339" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFz0FyrAToH3gMUJs4QF5aFosbDmge7_geRSR3JJ61CdYO65x8cGzMZK3mKcuZb9IsoeoDtyb2n4g8UXmWz5x0kUKr_2j3HmBh_plODK5teVqpIoDquhWS4IjyCaOVGRKvAsRn/s400/Screen+Shot+2011-09-05+at+9.37.49+PM.png" /></a></p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-49372509894596583432010-09-14T00:42:00.000-04:002010-09-14T00:42:53.053-04:00DES Encryption As Used in VNC Authentication<p>A few notes about how DES is used in the VNC Authentication security type (type 2)</p>
<ol>
<li>
DES is used in ECB mode.
</li>
<li>
The ECB Key is based upon an ASCII password. The key must be 8 bytes long. The password is either truncated to 8 bytes, or else zeros are added to the end to bring it up to 8 bytes. <a href="http://www.vidarholen.net/contents/junk/vnc.html" title="The DES encryption used by VNC servers">As an additional twist</a>, the bit order within each byte is reversed. So, if the ASCII password was "pword" <code>[0x 70 77 6F 72 64]</code>, the resulting key would be <code>[0x 0E EE F6 4E 26 00 00 00]</code>.
</li>
<li>
The VNC Authentication scheme sends a 16-byte challenge. This challenge should be encrypted with the key that was just described, but DES in ECB mode can only encrypt an 8-byte block. So, the challenge is split into two halves, encrypted separately, and then jammed back together.
</li>
</ol>
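<p>Item 2 is the easiest part to get wrong, so here's a quick Javascript sanity check of the key preparation (the function names are my own):</p>

```javascript
// Reverse the bit order within one byte: 0x70 ("p") becomes 0x0E.
function flipByte(b) {
  let out = 0;
  for (let i = 0; i < 8; i++) {
    out = (out << 1) | ((b >> i) & 1);
  }
  return out;
}

// Truncate/zero-pad the ASCII password to 8 bytes, then flip each byte.
// (Padding before or after flipping is equivalent: flipped zeros are zeros.)
function passwordToKey(password) {
  const bytes = Array.from(password.slice(0, 8), (c) => c.charCodeAt(0));
  while (bytes.length < 8) bytes.push(0);
  return bytes.map(flipByte);
}

console.log(passwordToKey("pword"));
// -> [14, 238, 246, 78, 38, 0, 0, 0], i.e. 0x0E EE F6 4E 26 00 00 00
```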
Here is some pseudocode (in Erlang) that should explain better than words can.
<pre><code class="language-erlang">password_to_key(Password) ->
    Flipped = lists:map(fun flip_byte/1, Password),
    Truncated = truncate(Flipped, 8),
    pad(Truncated, 8, $\0).

encrypt_challenge(Password, Challenge) ->
    Key = password_to_key(Password),
    <<High:8/binary, Low:8/binary>> = Challenge,
    EncHigh = crypto:des_ecb_encrypt(Key, High),
    EncLow = crypto:des_ecb_encrypt(Key, Low),
<<EncHigh/binary, EncLow/binary>>.</code></pre>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com3tag:blogger.com,1999:blog-32241238.post-28358492808387418302010-07-28T22:03:00.001-04:002010-07-28T22:04:25.118-04:00Unpacking a Safari Extension<p>So, now that Safari extensions are official (and not just a developer curiosity), I decided to see what people had managed to make over at the <a href="http://extensions.apple.com/" title="Apple - Safari - Safari Extensions Gallery">extension gallery</a>. It looks like there are some cool ideas out there. I was somewhat interested in the <a href="http://maypril.se/safariextensions/Exposer-1.1.safariextz" title="Exposer">Exposer</a> extension, which sounded a bit like Exposé for Safari. It seems like it kinda works, except that it doesn't always bring up the list of windows, and it's also really slow (it looks like <a href="http://developer.apple.com/safari/library/documentation/UserExperience/Reference/SBrowserTabClassReference/SafariBrowserTab/SafariBrowserTab.html#//apple_ref/javascript/instm/SafariBrowserTab/visibleContentsAsDataURL" title="SafariBrowserTab Class Reference">visibleContentsAsDataURL</a> is the culprit, natch, plus I have dozens of tabs open at a time).</p>
<p>Anyway, while I was checking it out, I realized that I had no idea what some of these Safari extensions were doing in the background. Stop and think for a moment; do you really want code that some Jimmy wrote in his basement to be able to watch everything that you do in your browser? Maybe I'm just paranoid, but I'd like to know what is really going on.</p>
<p>So, naturally, I tried unpacking an extension. It wasn't particularly hard, but you have to realize that a .safariextz file isn't a ZIP archive. It's a XAR. I know; I opened it up in my <a href="http://www.suavetech.com/0xed/0xed.html" title="0xED">hex editor</a>.</p>
<p>Here's how you can unpack one for yourself:</p>
<code class="command">
xar -xvf <span class="placeholder">extension</span>.safariextz -C ~/Desktop
</code>
<p>Don't worry; there's a directory just inside the safariextz archive. Now to see if there's anything malicious in these extensions. (Exposer looks clean so far.)</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-4118341727244527422010-06-28T23:33:00.000-04:002010-06-28T23:33:50.536-04:00What I learned today<p>Drybrushing comes first.</p>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQIezsnS7sGnV7X6TfhSxTx3WlffQG55V6vBrn76sxXxYdEXyXUVGCZAd7PJtjwTaIfVVFZABC_a_TZtIYQ4bixRGUSBNCGOTSGHgSInq_3Fvu4Bi02y1Pj4ls76Tm6h-Ub0rf/s1600/Metal.png" />Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com2tag:blogger.com,1999:blog-32241238.post-63878416776493837212010-04-29T19:48:00.000-04:002010-04-29T19:48:32.682-04:00Even More Space Marines<p>I've spent some more time on my Space Marine squad (it's going on something like 6 months at this point - I just don't get a ton of time to paint). Anyway, the tactical squad is basically done. Notice the highlights.</p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioSv7K30Pf9_sNxQHC3akGoc-023Tu-4vQE5E6Hbf5VvPqjEWSIoUWXZhNxuRSnX6EC1w-mBBAKYwNU3lx4FpdPnG0tMtL-m4wTuWNGQOgFxlBELWwXqhvkQvS2oL6_6DTFHmk/" /></p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBLkrfS42id-HCM0K9zPRiy9VQYmP9-ErBorVGr0pHnBkzniRv2tMSjaFSxbabHkRcbReXAgGVkrm0nfkXuCV4rXsBZGg6ICt7PPsxDfsCnIZW0jlKqftZzGzlTkmyQOR5j3r4/" /></p>
<p>In addition, I started to work on some figures from <a href="http://www.games-workshop.com/gws/catalog/productDetail.jsp?catId=cat200190&prodId=prod1570027">Assault on Black Reach</a>, including some terminators. In particular, I spent quite a while trying to get the white helmets to look good. I'm also pretty pleased with the eyes. No Golden Demons here, but not bad for tabletop.</p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_FlqiMcuBCCD1ChXk3r2YyHxTWNUaptGmEijV9sBAcnGDjnX1_AV4CnCVRxlLEeAr186nCp410d1IMY7P3Ru9Egun69OZ1nQn8CYH-MEAhEPWv1lb8zsFOXE6B5lB2tL8e15g/" /></p>
<p>Finally, I had a spare marine sitting around, so I decided to make him up as a Blood Angel. I have two copies of <a href="http://www.boardgamegeek.com/boardgame/54625/space-hulk-third-edition">Space Hulk</a> that I want to paint, but I wasn't going to do so until I was ready. I think I'm almost ready. Also, another shot of the terminators' eyes.</p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcT3KdmwhOF6X2S1_Un02zU_tug3ei_D7ppdMfWgnkYWsFK4WeG5Kc8JEoTow_rxryHRpgIZnzGtbgwyep_EqlMSxu44jKR9TNTIsC2Mv2PqoT5euGSt8YsDPopb3kpEy0okp_/" /></p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-8187457858284316742010-04-15T09:02:00.001-04:002010-04-15T09:02:33.999-04:00Things I Learned While Debugging an SSL Issue<ul>
<li>SSL is sometimes actually TLS. SSL is apparently on the way out, though TLS is only supported in a subset of common browsers. Fortunately, both use the same kind of certificate, so it's mostly transparent.</li>
<li>Java 1.6u17 removed SSL client support for MD2-signed root certificates. Except it sometimes didn't. Some u17 installs worked for me, some failed. 1.6u19 failed every time. If you have a Java client connecting to an SSL server, make sure that the server certificate was generated against a SHA1-signed root certificate.</li>
<li><a href="http://www.wireshark.org/" title="Wireshark · Go deep.">WireShark</a> will analyze both SSL and TLS. If there's any confusion about what is coming from the server, WireShark can help you figure it out.</li>
<li>The server sends the whole certificate chain to the client. I had thought that this was the case, but I had a hard time finding the documentation that spells it out. In the end, I used WireShark to find out.</li>
<li>Web browsers sometimes lie. When I would ask the web browser for the certificate chain, it would tell me something different from what the server actually sent. The root certificate from the server was signed with SHA1, but the browser would tell me that it was signed with MD2. This occurred in Internet Explorer, Firefox, and Safari. This was also a red herring that caused me to waste a lot of time.</li>
<li>Make sure you are looking at the right server. I had made an assumption about how the Java client software talked to the server, and that assumption was incorrect. In the end, the problematic certificate was on a different server altogether. Go figure.</li>
</ul>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-46281554325216199062010-01-29T12:53:00.000-05:002016-11-01T10:51:36.236-04:00Deriving the Y Combinator in Erlang - Part 2: Abstraction<p>This is the second post in a series on the Y combinator. <a href="http://bytecrafter.blogspot.com/2010/01/deriving-y-combinator-in-erlang-or.html" title="Why not?: Deriving the Y Combinator in Erlang (or Recursive, Anonymous Functions for the Masses)">Part 1</a></p>
<hr/>
<p>In the <a href="http://bytecrafter.blogspot.com/2010/01/deriving-y-combinator-in-erlang-or.html" title="Why not?: Deriving the Y Combinator in Erlang (or Recursive, Anonymous Functions for the Masses)">last post</a> on the Y combinator, we established that some functional languages (such as Erlang) make it hard to have recursive, anonymous functions. Namely, there is no name that is bound to the "current" anonymous function. Furthermore, we established that we could work around this problem by passing the anonymous function to itself. When we concluded, we had derived this much:
<pre class="code">
Helper = fun(Helper2, N) ->
    case N of
        1 -> 1;
        _ -> N * Helper2(Helper2, N - 1)
    end
end,
Fact = fun(X) -> Helper(Helper, X) end
</pre></p>
<p>This post will focus on extracting the Y combinator from the above code. We will follow a sequence of transformations, each with a specific intent. In the end, we will hopefully have some beautiful code.</p>
<h4>Reducing the Number of Arguments</h4>
<p>This factorial algorithm started with a function that took a single parameter. This function called itself with successively smaller values, until it reached a base case. However, we muddied the waters by passing the function around as well. We would like to return to having a one-parameter function. We can accomplish this by creating another level of closure:
<pre class="code">
Helper = fun(Helper2) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * (Helper2(Helper2))(N - 1)
        end
    end
end,
Fact = fun(X) -> (Helper(Helper))(X) end
</pre>
Now it is clear that our recursive function is really only calling itself with one parameter.</p>
<h4>Simplifying the Recursive Function Call</h4>
<p>The place where we make the recursive call is rather ugly. We have to build the function upon which we will recurse before we can actually call it. It would be better if the recursive function was explicitly named. We can do that, too:
<pre class="code">
Helper = fun(Helper2) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ ->
                PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
                N * PAHelper2(N - 1)
        end
    end
end,
Fact = fun(X) -> (Helper(Helper))(X) end
</pre></p>
<p>We call the new identifier <code>PAHelper2</code> to suggest that it's a partially-applied version of <code>Helper2</code>.</p>
<p>We can simplify the body further by moving <code>PAHelper2</code> to a higher scope.
<pre class="code">
Helper = fun(Helper2) ->
    PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * PAHelper2(N - 1)
        end
    end
end,
Fact = fun(X) -> (Helper(Helper))(X) end
</pre>
</p>
<p>As an aside, you might find this step overly complicated. You may ask, why did we not simply define <code>PAHelper2 = Helper2(Helper2)</code>? Why the extra indirection? We don't actually want to evaluate <code>Helper2</code> just yet. In fact, if we were to do so, we'd end up in an infinite loop. If the first step of <code>Helper</code> is to immediately call <code>Helper2</code> (an alias for <code>Helper</code>), we'll be recursing on ourself with no way to ever terminate.</p>
<h4>Extracting the Anonymous Function</h4>
<p>Right now, the body of our algorithm is embedded deeply within some necessary plumbing. We would like to extract our algorithm from the center of this. This is quite easy:
<pre class="code">
Helper = fun(Helper2) ->
    PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
    FactRec = fun(Self) ->
        fun(N) ->
            case N of
                1 -> 1;
                _ -> N * Self(N - 1)
            end
        end
    end,
    FactRec(PAHelper2)
end,
Fact = fun(X) -> (Helper(Helper))(X) end
</pre></p>
<p>Now we can pull it outside the body of <code>Helper</code>.
<pre class="code">
FactRec = fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end,
Helper = fun(Helper2) ->
    PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
    FactRec(PAHelper2)
end,
Fact = fun(X) -> (Helper(Helper))(X) end
</pre>
</p>
<h4>Simplifying Fact</h4>
<p>We're getting close, but the definition of <code>Fact</code> still leaves something to be desired. However, in order to make it simpler, we have to first make it messier. Start by moving <code>Helper</code> inside the definition for <code>Fact</code>:
<pre class="code">
FactRec = fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end,
Fact = fun(X) ->
    Helper = fun(Helper2) ->
        PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
        FactRec(PAHelper2)
    end,
    (Helper(Helper))(X)
end
</pre></p>
<p>Our goal is to build a general-purpose function <code>Y</code> that takes a function <code>F</code> and produces a self-recursive version of that function. Right now, the innermost part of <code>Helper</code> makes an explicit reference to <code>FactRec</code>. We want to eliminate that explicit reference:
<pre class="code">
FactRec = fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end,
Fact = fun(X) ->
    Y = fun(Proc) ->
        Helper = fun(Helper2) ->
            PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
            Proc(PAHelper2)
        end,
        Helper(Helper)
    end,
    (Y(FactRec))(X)
end
</pre></p>
<p>Now that we've done this, <code>Y</code> no longer has any bound variables, so we can pull it out of <code>Fact</code> completely:
<pre class="code">
FactRec = fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end,
Y = fun(Proc) ->
    Helper = fun(Helper2) ->
        PAHelper2 = fun(Z) -> (Helper2(Helper2))(Z) end,
        Proc(PAHelper2)
    end,
    Helper(Helper)
end,
Fact = fun(X) ->
    (Y(FactRec))(X)
end
</pre></p>
<p>Of course, if we want, we can simplify some of these definitions. <code>Y</code> can become a normal Erlang module function (rather than a function value). <code>Fact</code> itself can be eta-reduced - we can eliminate the noise of the explicit parameter. Also, <code>FactRec</code> doesn't need to be named anymore - it can become the anonymous function that we originally intended:
<pre class="code">
y(F) ->
    G = fun(G2) ->
        F(fun(Z) -> (G2(G2))(Z) end)
    end,
    G(G).

Fact = y(fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end)
</pre></p>
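<p>As an aside, this final <code>y</code> transcribes almost symbol-for-symbol into Javascript, which can be handy for experimenting in a browser console (this translation is mine, not part of the derivation):</p>

```javascript
// Direct transcription of the Erlang y/1 above.
function y(f) {
  const g = (g2) => f((z) => g2(g2)(z));
  return g(g);
}

const fact = y((self) => (n) => (n === 1 ? 1 : n * self(n - 1)));
console.log(fact(5)); // 120
```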
<p>Unfortunately, the <code>y</code> function only supports functions that take a single parameter. Some languages have a "splat" operator that can be used to represent "all the parameters;" unfortunately, Erlang does not. Instead, it can be useful to define a family of y functions that deal with functions taking more than one parameter:
<pre class="code">
y2(F) ->
    G = fun(G2) ->
        F(fun(Y, Z) -> (G2(G2))(Y, Z) end)
    end,
    G(G).
</pre></p>
<h4>Conclusion</h4>
<p>We have shown the difficulty in defining recursive, anonymous functions in Erlang. We showed a simple solution to this problem, and then generalized the plumbing to make it easier to use. While this is not necessary in all functional languages, I hope that this is useful to anybody working in a strict language, such as Erlang.</p>
<p>I am planning more posts on this topic. One post will explain the strange step I took in Simplifying the Recursive Function Call. Others will explain just what a fixed point is, what the fixed point combinators are, and how the Y combinator satisfies the definition of a fixed point combinator.</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com3tag:blogger.com,1999:blog-32241238.post-86381438811138546082010-01-18T14:18:00.004-05:002016-11-01T10:51:34.013-04:00Deriving the Y Combinator in Erlang - Part 1: Intuition<p>This is the first of a series on the Y combinator. <a href="http://bytecrafter.blogspot.com/2010/01/deriving-y-combinator-in-erlang-part-2.html" title="Why not?: Deriving the Y Combinator in Erlang - Part 2: Abstraction">Part 2</a></p>
<hr/>
<p>When I heard about the <a href="http://en.wikipedia.org/wiki/Fixed_point_combinator" title="Fixed point combinator - Wikipedia, the free encyclopedia">fixed point combinators</a> before, I didn't know what to make of them, so I filed the topic away in my brain. However, when I was working on implementing continuations in Erlang, I ended up building a small structure that reminded me of the Y combinator. With a little massaging, I extracted the actual Y combinator, and proceeded with what I was working on.</p>
<p>The actual definition of the Y combinator is insanely dense:
<blockquote><code>Y = λf·(λx·f (x x)) (λx·f (x x))</code></blockquote>
This is precisely why I didn't understand it at first - that definition means nothing to me. We need a more intuitive way to think about it.</p>
<p>Suppose you decide to write the factorial function in Erlang. A simple (by which I mean unoptimized) implementation might look like this:
<pre class="code">
fact(N) ->
    case N of
        1 -> 1;
        _ -> N * fact(N - 1)
    end.
</pre>
There's nothing particularly complicated here - we're just calling <code>fact</code> recursively. But what happens if you try to make <code>fact</code> into a fun (an anonymous function in Erlang)? Watch:
<pre class="code">
Fact = fun(N) ->
    case N of
        1 -> 1;
        _ -> N * ???(N - 1) %% How do we call ourselves? Fact isn't yet bound!
    end
end.
</pre>
In some languages, we could replace the <code>???</code> with <code>Fact</code>. Unfortunately, Erlang doesn't let you do this. If you tried, Erlang would say that <code>Fact</code> is unbound. This is true - until we've finished defining the fun, we can't assign it to <code>Fact</code>. Other languages provide you with a magic variable that represents the current function (Javascript has <code>arguments.callee</code>). Again, as far as I know, Erlang doesn't provide such a variable. Does that mean that we have no hope?</p>
<p>Let's look at this problem one step at a time. We need something to stand in for the <code>???</code>. We need a name that represents the current, anonymous function. Where can we get that from? In functional Erlang, there are only three ways that names are bound - by closure, by parameter, or by local definition. We can't close over it, because the anonymous function isn't yet defined. We can't create a local definition, because the local scope is too narrow for that. That leaves only one possibility - we need to pass the anonymous function to itself.</p>
<p><pre class="code">
Helper = fun(Helper2, N) ->
    case N of
        1 -> 1;
        _ -> N * Helper2(Helper2, N - 1)
    end
end.
Fact = fun(N) -> Helper(Helper, N) end.
</pre>
OK, so we created a helper function - more on that in a minute. <code>Helper</code> (formerly <code>Fact</code>) now takes an extra parameter, which just creates another name for the current, anonymous function. Since we intend for that to be the same as <code>Helper</code>, we call it <code>Helper2</code>. We know that <code>Helper</code> is a fun/2. Since <code>Helper2</code> is supposed to be another name for <code>Helper</code>, then <code>Helper2</code> must be a fun/2 as well. This means that, when we call <code>Helper2</code>, we need to also tell it about itself - that is, we need to pass <code>Helper2</code> along when we call <code>Helper2</code> to recurse.
</p>
<p>Now that leaves us to deal with the function <code>Fact</code>. Clearly, <code>Fact</code> needs to call <code>Helper</code>. We noted that <code>Helper</code> is a fun/2, so again, we need to call it with two parameters. The intent of the extra parameter to <code>Helper</code> was to be able to pass <code>Helper</code> to itself, so we do just that.</p>
<p>Believe it or not, we have just derived the concept behind the Y combinator. We have invented a scheme that allows an anonymous function to know itself, even in a language that doesn't directly support this. This is (I believe) the purpose behind the Y combinator. However, we're not yet done. There is still some cruft that we would like to eliminate. In particular, we hand-built the plumbing to route <code>Helper2</code> around. We would like to use higher order functions to eliminate this. This is what the Y combinator does - it manages the plumbing of anonymous functions that refer to themselves.</p>
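<p>The self-passing trick itself isn't Erlang-specific. Here is the same scheme, compacted into Javascript purely for comparison:</p>

```javascript
// The helper takes itself as an explicit first argument,
// and passes itself along on every recursive call.
const helper = (self, n) => (n === 1 ? 1 : n * self(self, n - 1));
const fact = (n) => helper(helper, n);
console.log(fact(5)); // 120
```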
<p>In the <a href="http://bytecrafter.blogspot.com/2010/01/deriving-y-combinator-in-erlang-part-2.html" title="Why not?: Deriving the Y Combinator in Erlang - Part 2: Abstraction">next part</a>, we will continue the derivation of the Y combinator in Erlang. Our goal is to eventually be able to write something like this:
<pre class="code">
Fact = y(fun(Self) ->
    fun(N) ->
        case N of
            1 -> 1;
            _ -> N * Self(N - 1)
        end
    end
end).
</pre>
It's not perfect, but in a language that doesn't directly support anonymous function recursion, it's not too bad!</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-37374030389496189222009-12-07T14:52:00.000-05:002009-12-07T14:52:05.428-05:00Even More Space Marine Painting<p>I've been slowly working on my Space Marines. It's taken a while, but they almost look like a unit. I think I've spent between 10 and 20 hours on them, but much of that was spent learning. Most of the major painting is done, and now it's time for touchups and details. For example, I spent some time on that rocket launcher to make it appear to be metal, painted red, and then worn. I think it's pretty convincing. I intend to do the same with that red bolter. The one that's not wearing armor needs a lot of work. Enjoy.</p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKfquRFrPfTY_dNNlC2_4zrYZ3z06lKRO2nUH3s-b3cHg2qB9md5LGM7-GnizNmzIO5CiEudogCY-TlJjOapxmHFexi0Y12KH9YcPnXNcslSTI3TgKRXIcFnZiGwd-HNGAlSvb/" /></p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBlCg1QaOmYlTc3ZCkGWYav7TT4_bfLvBfZVXF4SqqU-71xeucEfyUl4bqEvPqUiqhA4uykRX7uLxKVtF7wlRYQJYl3KupiJjxvwUYDG4QxMmUbAgX2sJ04cgeItwC_DNGBTEN/" /></p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEQOkh0w_9sMUESyAJESUtePmizEpAfnl8lZ_nrAS5ZS-LKTIIsCt3A5HvPWMl5UFSdcyLfuSjqQMAwQB5pddsr5IIV03kFx6_cwj_PEU6N49V4n9A6qIlKDcKUi7meTpgU1lS/" /></p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRMFsnrSAXoZDwHQIwo2duSCsV3iHSKGCgZLMFECJaDKpxweBxCWOs6I0l2ynTyFpPobHbJtsO0Je24DQMh99y2cuJHcm2dmdOS2SXZVggnHJe9FmD_t8BdbF_oORhmr_0Ectv/" /></p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com1tag:blogger.com,1999:blog-32241238.post-38922673481972125832009-11-17T18:08:00.000-05:002009-11-17T18:08:10.014-05:00Standard algorithms and boost::ptr_vector<p>I did something bad the other day.</p><p>OK, I can't tell if it was bad. In another environment, it would have been bad, but since this was C++, perhaps it was OK. I was in the situation where I had a <a href="http://www.boost.org/doc/libs/1_40_0/libs/ptr_container/doc/ptr_vector.html" title="Boost Pointer Container Library">boost::ptr_vector</a>, and I wanted to use a standard algorithm on it. Specifically, I wanted to use <a href="http://www.cplusplus.com/reference/algorithm/partition/" title="partition - C++ Reference">std::partition</a> to separate the objects that were still "alive" from those that were "dead" (where alive and dead are domain concepts in our application). The complexity here is that ptr_vector is a crazy container.</p><p>Most containers deal with a specific type T. You add Ts to the container. Dereferencing an iterator gives you a T&. It's generally assumed that a container operates on a single type, and the standard algorithms make this assumption.</p><p>The ptr_vector, on the other hand, appears to be two containers at once. Semantically, it's analogous to a std::vector<managed_ptr_type<T> >. It is intended that, by adding a pointer to the ptr_vector, the ptr_vector takes ownership of the lifetime of the memory at the end of the pointer. So, it is a container of pointers. On the other hand, when iterating a ptr_vector, it appears that it is a container of Ts.</p><p>In my case, I wanted to rearrange my ptr_vector. 
In particular, I wanted to partition the pointers into those whose object was still "alive", and those whose object was "dead". Since a ptr_vector is semantically a container of pointers, it made sense that I should apply std::partition to the ptr_vector. However, ptr_vector::iterator removes a level of indirection: instead of iterating T*, it iterates T&.</p><p>In fact, ptr_vector doesn't seem to provide any ways to rearrange the pointers once they are put into the container. Sure, you can mutate the object on the end of the pointer. You could operate at that level. But there doesn't appear to be a safe way to treat the ptr_vector as a container of pointers.</p><p>Fortunately, ptr_vector provides a back door. Its iterators support a base() method, which will return an iterator over T* (instead of an iterator over T&). This allows us to treat the ptr_vector as a container of pointers, and to use standard algorithms to manipulate those pointers. Now, this is not without peril. While it seems to be OK to rearrange the pointers, it wouldn't be safe to change the set of pointers. I wouldn't trust using something like <a href="http://www.cplusplus.com/reference/algorithm/remove_if/" title="remove_if - C++ Reference">std::remove_if</a>, because it might leave garbage in the container after it is done. The container might contain duplicate pointers. Some pointers might get dropped completely. If the container then goes out of scope, it will try to delete these pointers multiple times, which would be a bad thing. It might also fail to delete some pointers, because they were overwritten (and not preserved elsewhere in the container).</p><p>This whole thing felt like the best solution possible, while at the same time leaving a lot to be desired. I felt like I was violating the encapsulation of the ptr_vector. I suppose this is one of those cases for which they put in the base() methods on the iterators. Additionally, I don't see any clear way that they could do better. 
For example, I think an assumption of ptr_vector is that a given pointer only occurs inside it at most once. The standard algorithms don't necessarily respect this assumption; see my commentary on remove_if in the previous paragraph. The standard algorithms, in some cases, expect more freedom than ptr_vector can provide. This disconnect is unfortunate, but not without reason.</p><p>An important first step to helping with this problem would be to add methods to ptr_vector (and its siblings) that allow you to treat it as a container of pointers. You could add, remove, and re-arrange the container using these methods. In addition, they could maybe provide specializations of some of the standard algorithms for each container. This is difficult for third party developers to do, since the actual type of a ptr_vector::iterator is implementation defined. The boost guys can cleanly provide a specialization of std::partition for this kind of iterator, but I can't. Now, this isn't perfect. It would help with the standard algorithms, but not third-party algorithms. Still, it would be a great step in the right direction.</p><p>So, did I do something bad, or did I do something necessary?</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com3tag:blogger.com,1999:blog-32241238.post-2708171801023269572009-11-02T12:49:00.002-05:002009-11-02T12:49:47.479-05:00Why Google Experience phones are pretty awesome<p>As Android has grown, devices fall into one of two major classifications. Some devices are so-called "Google Experience" devices (featuring the phrase "with Google" somewhere on the device). Other devices are, well, not Google Experience devices. What is the difference? I've had a hard time figuring it out.</p><p>I think that Google Experience phones are updated by Google itself, while the rest of the devices are supported by the phone's manufacturer. 
I have an original G1 (a Google Experience phone), and I've gotten prompt updates as each new Android OS version has been released. This is similar to the experience that iPhone users enjoy.</p><p>Some devices, such as the HTC Hero and the Motorola Cliq (and the HTC Magic in certain regions), are not Google Experience phones. These phones were released with heavily customized software (such as HTC's Sense UI or Motorola's Motoblur). These customizations, while attractive to some users, also make it much harder for the phone manufacturer to update to a new version of the base Android OS. Both the Hero and the Cliq shipped with Android 1.5, and I don't believe that there are announced plans to update either to 1.6 (or 2.0, for that matter).</p><p>At first, I thought that the notion of a Google Experience phone was silly. At the time, the Magic was launching on Rogers with Exchange support, and that somehow disqualified the phone as being a Google Experience device. I now understand that Google Experience really means "unforked code base". In order to add Exchange support, I suspect that HTC had to fork and modify the standard Mail app. While they were able to add a feature that people wanted, it really just makes these phones into some sort of mutant Android device. No thank you. Google should really make it clear to users that the Google Experience is a feature in and of itself.</p><p>Android, at this point, is a rapidly evolving platform. Google Experience phones seem to be the best way to keep up with this evolution. I was pleased when I heard a Verizon rep say that the Droid will be a Google Experience phone. Now they just need to release a T-Mobile US GSM version, and I'll be happy. Over time, Android evolution will slow down, and then it might make sense for a manufacturer to fork the Android code base. Maybe they would even be willing to contribute back to the core distribution. 
But, until then, I'm sticking with Google Experience devices.</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-19921307468026894362009-11-02T12:13:00.000-05:002009-11-02T12:13:41.838-05:00Fixing hard disk clicking / aggressive head parking on Mac OS X<p>I recently bought a Western Digital Scorpio notebook hard drive to put into my 2007-vintage Macbook Pro. Everything seemed fine at first. However, as I used my laptop, I noticed that it would frequently make a quiet clicking noise. At first, I thought that I had gotten a bad disk. However, after doing a little research, it became clear that this is a common problem. This clicking is a "normal" operational noise - it is the sound of the heads parking.</p><p>People say that you should just get used to the noise. However, <a href="http://dougitdesign.com/blogs/blog_12_08_mac-clicking-hdd-hard-drive-noise-hdapm.html" title="Does your Apple notebook hard drive (HDD) ever sound like little mice are playing table tennis inside of it? Or, why your HDD might be pre-programmed for quick failure.">this blog post</a> makes the argument that every one of these clicks is killing your hard disk. Some people claim that this is related to the sudden motion sensor that's built into most (if not all) Apple portables. However, this is a red herring. The disk still clicks even if it is sitting on a table. It is the hard disk's own built-in power management that is causing the head parking. The disk's SMART statistics record the number of head parking cycles. If you want to see this for yourself, you can use either <a href="http://sixtyfive.xmghosting.com/products/smartctl/" title="Sixty Five, Ltd. 
» smartctl">this menu extra</a> or <a href="http://sourceforge.net/apps/trac/smartmontools/wiki" title="smartmontools">this command-line tool</a> (<a href="http://smartmontools.darwinports.com/" title="Smartmontools version 5.38 - How to Download and Install on Mac OS X">MacPorts</a>). You are looking for the Load Cycle Count value.</p><p>To explain the problem (as I understand it), modern hard disks have some responsibility to manage their own power consumption. One manifestation of this is to spin down the platters and to park the read/write heads. The operating system can influence the time before the heads are parked by setting the "APM Level" of the drive to a value between 0x00 and 0xfe. Each drive manufacturer is free to interpret this value as it sees fit. Mac OS X seems to set a default APM Level for all disks, and I think this value is 0x80. This is fine for Apple-shipped disks, but not necessarily for third-party disks.</p><p>But wait! Perhaps you have bought the same kind of drive that Apple ships in their laptops. Are you safe? Not necessarily. Allegedly, Apple flashes its own firmware onto the hard disks that it installs at the factory. That's right, you're not running stock disk firmware. My suspicion is that this firmware changes the drive's interpretation of the default APM level. Recently, there was <a href="http://www.apple.com/downloads/macosx/apple/firmware_hardware/harddrivefirmwareupdate20.html" title="Apple - Downloads - Firmware & Hardware - Hard Drive Firmware Update 2.0">a firmware update</a> from Apple that fixed this problem on disks that were shipped by Apple. Unfortunately, you can't use this utility to flash the new firmware onto a non-Apple drive.</p><p>Right, so the two solutions that I see are either:<ol><li>Write our own firmware</li><li>Set a different APM Level value</li></ol>Obviously, option 2 looks much more attractive.
Bryce McKinlay wrote a utility called <a href="http://mckinlay.net.nz/hdapm/" title="hdapm Download page">hdapm</a> for doing just that. He even includes a launchd configuration to run hdapm as the system starts. One thing not mentioned in the readme is that you need to get the permissions of the launchd config file correct. The file needs to be owned by root (preferably root:wheel), and must not be group- or world-writeable. I also changed the config file a little; here is my version:</p><pre><?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>hdapm</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/hdapm</string>
        <string>disk0</string>
        <string>max</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
</pre><p>The biggest change is that I removed the "LaunchOnlyOnce" and "ServiceDescription" keys. I didn't see a reason to load it only once, and ServiceDescription seemed undocumented. This solution isn't perfect, however. First of all, hdapm uses a seemingly undocumented back door to adjust the APM setting. Ideally, we would actually spawn a daemon that continuously monitors and adjusts the drive's APM level. I'm not yet convinced that Mac OS X won't override my setting. Still, I have been running with this configuration for a couple of days, and things seem to be working well.</p><p>I have an open support issue with Western Digital to see if they have a fix for this issue. If there were some way that we could change the way the disk behaves under OSX, we could forego the additional software, which would be great. I've also heard of a utility called wdidle, which allegedly lets you write new idle settings to the hard drive. However, I was unable to find any official site for this software, so I'm not using it.</p><p>Finally, I would like to thank two people. First, <a href="http://dougitdesign.com/blogs/blog_12_08_mac-clicking-hdd-hard-drive-noise-hdapm.html" title="Does your Apple notebook hard drive (HDD) ever sound like little mice are playing table tennis inside of it? Or, why your HDD might be pre-programmed for quick failure.">Doug Aghassi's post</a> really explained the symptoms that he was experiencing and put me on the right track for solving the problem. Thanks, Doug. Also, Bryce McKinlay was kind enough not only to write the hdapm utility, but also to answer the questions that I emailed to him. Thanks, Bryce.</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com5tag:blogger.com,1999:blog-32241238.post-91416213331251091952009-09-20T11:57:00.000-04:002009-09-20T11:57:20.545-04:00More Painting Space Marines<p>I have an update to my Space Marine painting. I've continued to shade the marines. 
After the previous coat of 1:1 <a href="http://www.games-workshop.com/gws/catalog/productDetail.jsp?catId=cat160002&prodId=prod810878">Regal Blue</a> to <a href="http://www.games-workshop.com/gws/catalog/productDetail.jsp?catId=cat160002&prodId=prod810879">Ultramarine Blue</a>, I added the following:</p>
<p>
<ul>
<li>1:2 Regal Blue to Ultramarine Blue</li>
<li>Ultramarine Blue</li>
<li>2:1 Ultramarine Blue to <a href="http://www.games-workshop.com/gws/catalog/productDetail.jsp?catId=cat160002&prodId=prod810893">Codex Grey</a></li>
</ul>
</p>
<p>
Each coat is painted in a slightly smaller area. The goal is to shade the model to match an imaginary light source. In my case, my light source is directly above the model. Here are the results:<br />
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6AQAVa9UZrz1BTK69dMFPEiqgskLPjYXikzi035b94-NGBsdJIv52FESwBwHlhsoGum0Ittr5viAnyb7Mgw7r_nQWyitnGWY_VdZ7nLRiLeRaRJRCa1KtIodnzTjUOU9dJLjj/" /><br />
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhv-R55YWherlakEHzWtliTAPteeRKiNv9Enj7RFdZPqT3-NPU8-Y2DTcURVRd1mQdd9rLlf36CjD8r1BGI-zdrSrHpSGQcCZdxhSnnrVN47uDVNhV8dwFGk_vPp0UK-WP0m0Hm/" /><br />
</p>
<p>I'm now only painting 2 marines. I'm going to finish them before starting others, so that I can improve on my technique for those later models.</p>
<p>
I also started painting my Tyranid spores. They have been basecoated with Chaos Black spray, then painted with <a href="http://www.games-workshop.com/gws/catalog/productDetail.jsp?catId=cat160002&prodId=prod810857">Blood Red</a>, then washed with Chestnut Ink. I had tried Red Ink, but unfortunately, that color is too close to Blood Red.<br />
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtoFQC39falHVIbnLx0baVlqxOq3glyRu8A4JhsmVxa_OHAmLEuFpGUn6kYrhDow977fI1Nb-weYGQ9pqMHmtjDYfEQbZG-wY0qFBu3to99ThYYO0yKj2HgJao9VistiPCABMP/" />
</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-80247803506009571762009-09-16T22:00:00.000-04:002009-09-16T22:00:31.470-04:00Painting Space Marines<p>For some reason, I got an itch to paint some <a href="http://en.wikipedia.org/wiki/Warhammer_40,000">Warhammer 40k</a> figures. I've been slowly assembling them over... I don't know, maybe 2 years. Hey, I have a lot of things that I do in my free time. Anyway, I finally got around to priming them the other day, and I've been painting like a crazy person since then.</p>
<p>Now, I should mention that these aren't the first figures I've painted. I had painted a squad of 5 marines that came in a box along with 6 paints and a brush. Here's one of them.<br />
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLKCe-p9VxMk06IhFNkCEE6Spv0JORc7H1FmbAbE28l05G6E95K1KPuol8Aw11kCU9X6po3T-FEXSB9HNt_WjblZ8ml2yR_5RC0GfO7P5JysuNmrXzRokmKGbEMn1yd6PZNh7w/" /></p>
<p>That was a good learning experience. This time, I have quality glue, basing material, spray primer, a variety of brushes, and lots of paints and inks. I'm posting some photos of various steps of the process. Hopefully, by putting them here, I will actually finish painting them.</p>
<hr />
<p>
This space marine has been assembled, based, and primed. The basing material is sand and rock, glued into place with regular white glue. The whole model is sprayed with a flat black primer, to help the other paint stick better.<br />
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdo0MTVrC1Hkqs0vuiLLO5TcIrlb3D9VWHDDQ4vXVXZ5QMzZ0uPZGmmY8-lnPoYXxcFJ46nV71pk3fQwbI82xMHc4B4d-A4iBEgoXjIpppQIYWb4O9ZMXFTRy3QOxBZbuxNZ3E/" />
</p>
<p>
This marine has had his armor painted with <a href="http://www.games-workshop.com/gws/catalog/productDetail.jsp?catId=cat160002&prodId=prod810879">Ultramarine Blue</a>. Actually, his feet are missing some paint, but I'll get around to that. Additionally, the black ground cover has been drybrushed to look like sand and rocks. I started with rocky sand, painted it black, then painted it to look like rocky sand again. Crazy? Probably.<br />
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAJzn2JAhLdTBv_d4Zmrxr5vgnF0hnXJX8_ViqhofMRue38JIRMLGKqOJcPXPXHkbfiQl-TV7iLMEm7fXK3C5aS_zEcgx36sUtfkd_KiUOc40ywNtKomLEp8BCYRZ-D91mZpSZ/" />
</p>
<p>
This marine has been washed with a blue ink (which I think has been replaced with <a href="http://www.games-workshop.com/gws/catalog/productDetail.jsp?catId=cat1210000&prodId=prod1340014">Asurmen Blue Wash</a>). Ink is used because it settles in the crevices and provides great depth.<br />
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgo8JYfT-edLRwkejzjohhxhbXPwCEY3BPRx85LN2wUBGI2JtSgzSsZWb4cR05AEebTB8ilycShZJncgARXnWejt389j7rU0p3FpHLbHQs7oJrmgha5JsyCB8AESmmZPYi4-8CA/" />
</p>
<p>
This marine has had his armor panels painted with a 50/50 mix of <a href="http://www.games-workshop.com/gws/catalog/productDetail.jsp?catId=cat160002&prodId=prod810878">Regal Blue</a> and Ultramarine Blue. By leaving a slight gap around the edges, the blue wash peeks through the armor panels, and this looks great.<br />
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqN2d2iG8ChCETfwmOgVTO01HlmSWnkgOZDwLbGRD_mRp1SEhmFwwwX_VWrWfr6CmfK2k7KyXn8ujn396vXMq97G0xAWwra1WQM1nQOkQi59Wwok8agpi5Wp2sv7Ov_dJ6bzgj/" />
</p>
<p>
For those who don't know much about 40k, these figures are pretty small. Here's a comparison shot.<br />
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi06Y7S0skIqChRrg1RT6p6fYB80sI_MCL6q2uFhyphenhyphenPbyFI5yt7Q57f1mL4BBcQ2sVtnFk9Mv9c8zvkSnCfZLQOMHTQg94NPhoi381-7U14Ii5xleeZFateShywS6rTBjMjMmgoa/" /><br />
Now imagine trying to paint those eye lenses. Yeah, I'm not looking forward to it, either. Besides the eyes, I still have a lot to do. I plan to put another few layers of blue on the armor, paint the shoulder pad edges, drybrush the metal pieces, and so on. If anybody reads this and has feedback or suggestions, I'd love to hear from you!</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-11955075539821757562009-08-07T13:36:00.003-04:002009-08-07T13:40:32.955-04:00I wish Blogger would let me rename tagsOccasionally, if you read this via RSS, you will notice that I repost old articles. This isn't intentional - any time I edit or retag a post in Blogger, it puts it back on the feed. I don't see any way to say "quietly republish this post". The same problem occurs if I want to rename a tag - I have to remove the tag from all posts, and then re-add it to all posts. Please, Google, add features to Blogger to make this less painful.Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-13009473978242887642009-08-07T12:49:00.005-04:002009-08-07T13:36:43.549-04:00T-Mobile Visual Voicemail Problems<p>T-Mobile recently released their <a href="http://www.cyrket.com/package/com.oz.mobile.android.voicemail.application" title="Cyrket - T-Mobile Visual Voicemail">Visual Voicemail</a> application in the <a href="market://search?q=pname:com.oz.mobile.android.voicemail.application">Android market</a>. It had some launch problems, but those are mostly smoothed out at this point. The app works pretty well, and I'm glad that they have finally implemented it. However, the app does have its share of first-release problems. They are listed here, in the order that I hope T-Mobile addresses them.</p><p><strong>VVM doesn't work with Wifi.</strong> Most people probably have Wifi enabled on their phones. 
After all, it's the most efficient (both bytes / time and power / byte) way to transfer data. However, VVM doesn't work with Wifi. It will neither notify you of new voicemails, nor will it download new messages. In order to make it work, you need to <ol><li>Turn off Wifi</li><li>Wait or click "Synchronize Voicemails"</li><li>Turn Wifi back on</li></ol>Call me crazy, but that's just stupid. At the very least, the "Synchronize Voicemails" button should do those steps for you, similar to the <a href="http://apps.t-mobile.com/">My Account</a> app. There has been some FUD about the reason for this omission. Some people claim that it's for "security". I'm going to make this perfectly clear: there is no security-related reason to prevent people from getting their voicemails over Wifi. It's easy to encrypt data that is transmitted over the internet. There are a number of possible reasons they don't support Wifi. It might just be too much work. Maybe they haven't had time yet. It might be cost-prohibitive. Perhaps there is a technical restriction: they would need to read the SIM's IMSI into the app, which Android might not allow. Whatever the case, it's not a security issue.</p><p><strong>VVM has a separate notification icon.</strong> Every time you get a new voicemail, you get the standard voicemail icon. In addition, you get a new VVM icon. For now, this is fine. If I have Wifi enabled, I still get notified of a new voicemail (via the standard voicemail icon). When the Wifi issue is fixed, however, I would like to see the new icon go away. The notification bar is crowded enough.</p><p><strong>VVM uses the wrong audio stream.</strong> VVM uses the "media" audio stream. Many people complain that this prevents you from using a Bluetooth headset to listen to your VM. I don't have a Bluetooth headset, so I can't confirm this.
It should use the "phone" audio stream.</p><p><strong>The UI needs polish.</strong> There are some small look-and-feel issues:<ol><li>The VVM status bar icon doesn't match the <a href="http://developer.android.com/guide/practices/ui_guidelines/icon_design.html" title="Icon Design Guidelines | Android Developers">Android UI Guidelines</a>.</li><li>After pressing the "Synchronize Voicemails" button, there is no feedback. No spinner, no progress bar, nothing.</li><li>The long-press context menu on a VM does not include a "delete" option (only Open, Reply As, and Copy To)</li><li>The buttons that appear when you press the "Menu" button have no icons.</li><li>The "Copy to" screen is a little too technical. The file name defaults to vm<i>n</i> (i.e. vm0, vm1, vm2). It should instead default to something like "Voicemail from John Smith on 22 Jul, 2009". In addition, the save directory defaults to "/sdcard". Should users really be exposed to UNIX pathnames? Clicking the Save in Directory dropdown presents me with a file browser for my SD card. For me, this lists locations like ".Trashes" (I use a Mac), espeak-data (the data files for the <a href="http://eyes-free.googlecode.com/" title="eyes-free - Project Hosting on Google Code">Text-To-Speech engine</a>), "where" (the data for <a href="http://where.com/" title="WHERE GPS Mobile Application, iPhone App & Location Based Services Development Platform">Where</a>), and other places that I probably shouldn't be saving random files. Do I really need to be able to specify the location to save the voicemail? Why not just save everything to /sdcard/voicemails? Or at least, why not assume that all voicemails get saved to /sdcard/voicemails or a subdirectory (i.e. you can't save a voicemail outside this directory, only inside)?</li></ol></p><p><strong>Initial, first-run experience is lousy.</strong> When I first installed and ran the app, it wasn't able to connect to the server. After disabling Wifi, it worked. 
I was taken to a set-up screen, but then got distracted by something and hit the back button. When I relaunched the application, the set-up screen wasn't presented. This worried me (is there some setup that needed to occur), so I uninstalled the app and re-installed it. I don't think I did any harm, but the app didn't behave as I expected, so I didn't know what to think.</p><p><strong>Deleting doesn't always work.</strong> I'm going to chalk this up to glitch behavior. The first time I used the app, I went through and deleted some old messages. Then I went into the analog voicemail system, and they were back! I deleted them a second time, and now they're really gone. <em>shrug</em></p><p>Now, I don't mind all of those problems. I'm glad that T-Mobile finally released a VVM app. I'm glad that they released it early, warts and all. I hope that they are not done working on it. For me, the Wifi issue is huge. I'm connected to Wifi 90% of the time, and that means that the VVM app doesn't function as a voicemail app 90% of the time. I suspect many other people are in the same boat as me. Furthermore, Google Voice is coming. If the Wifi issue isn't fixed by the time GV is generally available, I might just jump ship, and T-Mobile doesn't want me to do that. I understand if T-Mobile can't fix this on their own - they might need support from Google. Still, every carrier is going to want to provide VVM, and it would behoove Google to provide whatever support necessary.</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com1tag:blogger.com,1999:blog-32241238.post-40033214743498969312009-07-18T20:37:00.004-04:002009-07-18T20:49:57.577-04:00Getting Started with Stack Overflow<p>I joined <a href="http://stackoverflow.com/" title="Stack Overflow">Stack Overflow</a> shortly after it launched, but I didn't do anything with it. I found it in search results here and there, but I never asked any questions. I would have done more, but new users are pretty helpless. 
You can't vote up or down, you can't comment on answers, you can't post an answer with more than 1 link, etc. It's almost like you're not wanted. Compared to the relative freedom of Wikipedia, it was really demoralizing to me.</p><p>I decided tonight to actually try to get some reputation. Most of the interesting stuff happens around 50 reputation, so that's my goal. I answered 2 questions this evening. Suddenly, my rep is skyrocketing. I'm <a href="http://stackoverflow.com/users/120278/daniel-yankowsky" title="User Daniel Yankowsky - Stack Overflow">at 31</a> right now, and I bet that will continue to climb on its own. It seems that people are very willing to vote your answers up if they are relevant. As you can see, it shouldn't be hard to get to the point of actually being able to contribute.</p><p>So, if you want to get started with Stack Overflow, here are my suggestions.<ol><li>Go to <a href="http://stackoverflow.com/questions?sort=newest" title="Hottest Questions - Stack Overflow">the newest questions page</a>.</li><li>Find something that you know something about. Don't troll, and don't post to random topics about which you know nothing.</li><li>Write an answer.</li></ol>That should just about do it. Don't despair, it's easier than it initially seems.</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-69031095041170280802009-07-16T02:06:00.003-04:002009-07-16T02:34:51.020-04:00Fixing the Xbox 360's Grinding Noise / Tray Ejection Problem<p>I spent an evening performing unexpected surgery on my Xbox 360. When I put a game in, the drive made the most horrible grinding noise. On top of that, the drive would not stay closed. The tray would almost always eject seconds after being closed. Research led me to conclude that the rare earth magnet that is part of the disc clamp had probably become unglued.
Since my initial warranty has long since expired and the red ring of death warranty only has another year, I decided to crack the case myself.</p><p>It's not worth going through the details, but I did find two useful videos. The first is <a href="http://www.youtube.com/watch?v=9WQP66Xvvko">an overview of the problem</a>. The second is <a href="http://www.youtube.com/watch?v=Uj2A7mGGjuY">a good tutorial on opening the 360's case</a>. I used some Zap-A-Gap brand contact adhesive that I had laying around to actually reattach the magnet.</p><p>I felt quite proud to have diagnosed, researched, and fixed the problem on my own (without sending my console to Microsoft for repairs). $100 plus shipping just to have some intern apply some glue is a little extreme. So many people have had this problem that YouTube videos just refer to it as "the grinding noise problem." Either Hitachi (the drive manufacturer) just made a lousy drive, or Microsoft didn't correctly anticipate the effect that their game furnace would have on the glue that Hitachi used. I don't know who is to blame, but Microsoft should extend their warranty on the 360 to 3 years for all defects, not just those that cause the <a href="http://en.wikipedia.org/wiki/Red_ring_of_death">red LEDs to light up in a circular fashion</a>. I don't expect my car to wear out in 2 years, and I use it every day. My game console shouldn't wear out, either.</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0tag:blogger.com,1999:blog-32241238.post-91608512425529478942009-07-15T00:02:00.003-04:002009-07-15T00:15:56.443-04:00Feedback Works!<p>It's hard to say if this is coincidence, but it's worth mentioning. A few weeks ago, I sent eBay feedback on their new site design. My gripe was that they were nesting scrollable areas within scrollable areas. As I was using the site today, it dawned upon me that the bad behavior was gone. Somebody actually fixed it. 
Perhaps it was my email; perhaps it was the combined voice of thousands of users; perhaps some developer just realized that there was a better way. Whatever the case, I'm glad that eBay made the change, and I like to think that I helped in some way. Thanks, eBay!</p><p>Before, things just felt wrong. I would be scrolling the page, and it would mysteriously stop. This happened on every single page, it seemed. When they switched to good design, I didn't even notice at first. It just felt natural. This brings to light a sort of design axiom: design successes are invisible, design flaws are glaring.</p>Danhttp://www.blogger.com/profile/06099373265709774874noreply@blogger.com0