Jason Fried asks “Would you pay $5/month for Google if it wasn’t free?”
Dave Winer says “Clone the Google API”.
I’d like to join those together and ask “Would you pay $80/month to own a part of the infrastructure that powered a free implementation of the Google API?” In other words, rent a dedicated (low-end) server from the likes of Layered Technologies, or EV1Servers, and use it to run a distributed part of the crawling, indexing, or querying part of a full-on search engine.
In his now-famous A Plan for Spam, Paul Graham wrote:
… the spam of the future will probably look something like this:
Hey there. Thought you should check out the following:
http://www.27meg.com/foo because that is about as much sales pitch as content-based filtering will leave the spammer room to make. (Indeed, it will be hard even to get this past filters, because if everything else in the email is neutral, the spam probability will hinge on the url, and it will take some effort to make that look neutral.)
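Graham's point — that the verdict hinges on the one interesting token — falls straight out of his formula for combining per-token spam probabilities. A minimal sketch (the combining formula is from the essay; the token probabilities here are made-up illustrations):

```python
from functools import reduce


def combine(probs):
    """Combine per-token spam probabilities, naive-Bayes style,
    as in A Plan for Spam: prod(p) / (prod(p) + prod(1 - p))."""
    prod = reduce(lambda a, b: a * b, probs)
    inv = reduce(lambda a, b: a * b, [1 - p for p in probs])
    return prod / (prod + inv)


# A message of neutral words (0.5 each) plus one suspicious URL token (0.98):
print(combine([0.5, 0.5, 0.98]))  # -> 0.98: the score hinges entirely on the URL
```

The 0.5 tokens cancel out of the formula, which is exactly why the hypothetical spam above leaves the filter nothing to go on but the url.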
I've been ongoing'ed, or is that Tim'ed – the ongoing version of being Slashdotted. The main difference is the smaller number of visitors, and the fact that they're an even more geeky bunch (so far they all use Firefox or Safari).
Anyway, the link in question came from Tim Bray's Genx page. If you're looking for a way to generate well-formed XML from C/C++, and don't want to sprinkle your code with printfs, or equivalent, then check out Genx.
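Genx itself is a C library, but the appeal is the push-style API that makes malformed output impossible: start an element, add text, end the element, and the library handles escaping and nesting. As a rough analogue (not Genx's actual API), the same flow looks like this in Python with the standard library's XMLGenerator:

```python
import io
from xml.sax.saxutils import XMLGenerator

out = io.StringIO()
gen = XMLGenerator(out, encoding="utf-8")
gen.startDocument()
gen.startElement("greeting", {"lang": "en"})
gen.characters("Hello <world> & friends")  # special characters escaped for us
gen.endElement("greeting")
gen.endDocument()
print(out.getvalue())
```

Compare that with sprinkling printfs through your code: forget one `&amp;` and your output silently stops being XML.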
Several times over the last couple of weeks, I've found myself refactoring CSS – making changes to simplify and group rules, but with the aim of not changing the appearance at all. To test my work, I've been taking screen shots and swapping backwards and forwards to look for differences.
It suddenly struck me that what I really need is a CSS/HTML-specific diff tool. One that can take some CSS and some specific HTML, work out all the borders, margins, placements, and so on, and report any differences between two versions.
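Until such a tool exists, the screenshot-swapping I've been doing by eye can at least be automated with a crude pixel diff. A minimal sketch in pure Python, working on raw RGB byte buffers rather than actual screenshot files:

```python
def pixel_diff(before: bytes, after: bytes, width: int, height: int):
    """Return (x, y) coordinates of pixels that differ between two
    same-sized RGB buffers (3 bytes per pixel, row-major order)."""
    assert len(before) == len(after) == width * height * 3
    changed = []
    for y in range(height):
        for x in range(width):
            i = (y * width + x) * 3
            if before[i:i + 3] != after[i:i + 3]:
                changed.append((x, y))
    return changed


# Two 2x2 white "screenshots" that differ only in the bottom-right pixel:
a = bytes([255, 255, 255] * 4)
b = bytes([255, 255, 255] * 3 + [200, 0, 0])
print(pixel_diff(a, b, 2, 2))  # -> [(1, 1)]
```

An empty result means the refactoring really was appearance-preserving – which is all I'm trying to verify.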
A couple of recent weblog posts, at Microdoc News and Virtuelvis, have echoed something I've been meaning to write about for a while – that Google makes heavy use of a page's contents when indexing a site.
For a long time this site had a PageRank of zero, although for a few days now, it's been four. What I noticed was that, despite having a PR of zero, and being linked from a vanishingly small number of other places, my articles were often turning up on the first page of Google results for two-word queries containing words from my article titles.
Recently, a bright orange book, with a beanied man on the front cover, arrived on my front verandah. I'm referring, of course, to Jeffrey Zeldman's new book, Designing With Web Standards, and the beanied man is none other than Zeldman himself. Forthwith, my mini-review of his book:
I wasn't quite sure what to expect from this book – all I knew was that it had sold out its first printing before I could get my hands on a copy.
I've recently been playing around with RSS – first by trying a few readers, then by setting up my own feed. I wasn't entirely happy with any of the readers I tried, so I've been building my own to more closely match my own needs, and to learn about RSS issues along the way.
It's pretty clear that everybody has a different idea about the semantics of any given version of RSS.
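For instance, whether an item's description holds a summary or the full post varies from feed to feed, and which elements are present at all varies between RSS versions – so a reader has to probe for whatever is there and fall back gracefully. A sketch of that with the standard library (the sample feed is made up for illustration):

```python
import xml.etree.ElementTree as ET

SAMPLE = """<rss version="2.0"><channel>
  <title>Example feed</title>
  <item>
    <title>First post</title>
    <link>http://example.com/1</link>
    <description>A summary in some feeds; the whole post in others.</description>
  </item>
</channel></rss>"""


def items(rss_text):
    """Yield one dict per item, tolerating missing elements."""
    root = ET.fromstring(rss_text)
    for item in root.iter("item"):
        # Different feeds populate different elements, so default everything.
        yield {
            "title": item.findtext("title", default="(untitled)"),
            "link": item.findtext("link", default=""),
            "body": item.findtext("description", default=""),
        }


for entry in items(SAMPLE):
    print(entry["title"], "->", entry["link"])
```

Defensive defaults like these are exactly the sort of thing you only learn to need once you've pointed a reader at a few dozen real-world feeds.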