Monday, December 29, 2008

Windows 7 beta 1: sound does not work on Macbook Pro (RealTek HD Audio)

This post is googlebait: I couldn't find the solution for this on Google, so here's how I solved it. Hopefully others will be spared the messing around.

After installing Windows 7 beta 1 (build 7000) on my MacBook Pro, and installing the Boot Camp drivers off the Leopard disc, as well as the Vista Boot Camp 2.1 update, everything worked very nicely... except sound.

Win7 detected "High Definition Audio Device" and everything looked like it should have worked, but no sound came out of the speakers.

After much mucking around, here's what I did:

Go into the Leopard drivers folder. There should be a directory called Drivers, and under that a file called RealTekSetup.exe. If you try to run this normally, it will fail.

What I did next was:

  • Right-click it, and select Troubleshoot Compatibility
  • Click Next and wait for it to finish 'Detecting Issues'
  • Select 'The program worked in earlier versions of Windows...'
  • Select Windows Vista
  • Click Next a few times, let the Realtek installer run, reboot, and presto!

As for Win7 itself? Well, the beta is faster, nicer, and all around better than Vista. I'll never go back. They didn't do a good enough job of copying the dock... but it's still miles ahead of Vista, and that's another blog post.

Byebye!

    Monday, September 29, 2008

    Embedded IronRuby interactive console

    Screenshot!

    This is a small DLL which you can add to any .NET WinForms project. When run, it brings up an interactive console where you can poke around with your app. It's running live inside your process, so anything your app can do, it can do. I thought this was kind of cool :-)

    How to get it going:

    1. Download and build IronRuby by following the instructions on IronRuby.net - I built this against IronRuby SVN revision 153. As of RIGHT NOW the current revision is 154, which doesn't build.
    2. Download the Embedded IronRuby project from the following URL - you can use SVN to check it out directly from there. (I'm assuming familiarity with SVN in the interests of brevity)
      http://code.google.com/p/orion-edwards-examples/source/browse/#svn/trunk/dnug/ironruby-presentation/EmbedIronRuby
    3. Open the EmbeddedIronRuby/EmbeddedIronRuby.sln file in Visual Studio, and remove/add references so that it references IronRuby.dll, Microsoft.Scripting.dll, Microsoft.Scripting.Core.dll, and IronRuby.Libraries.dll. These will be in the IronRuby build\debug folder that you built in step 1.
    4. Compile!
    5. For some reason, when you compile, Visual Studio will only copy IronRuby.dll, Microsoft.Scripting.dll and Microsoft.Scripting.Core.dll to the bin\debug directory. It also needs IronRuby.Libraries.dll in that directory (or in the GAC) to run, otherwise you get a stack overflow in the internal IronRuby code when you run it.
      The joys of alpha software I guess :-)
    6. Run the app and click the button!
    You can also add this embedded console to your own app. Just stick all the dlls in your app's folder (or the GAC) so it can see them, add a reference to EmbeddedRubyConsole.dll, and in your app do this: new EmbeddedRubyConsole.RubyConsoleForm().Show();

    Credit: Some of the 'plumbing' code (the TextBoxWriter and TextWriterStream) comes from the excellent IronEditor application. Full credit to, and copyright on, those files to Ben Hall. Thanks!

    IronRuby Presentation!

    I recently gave a presentation to my local .NET user group about IronRuby.

    Click on the image to download the slides as a PDF file.
    Note: This was exported from keynote with speaker notes, which I've revised slightly since giving the presentation.

    As part of this, I demoed a small library I wrote which gives you a live interactive ruby console as part of your running app.

    Basically it lets you poke around your program and modify things while it's running. I'll post the code and notes about that shortly.

    Tuesday, July 29, 2008

    Ruby Unit Converting Hash

    I'm currently working on a project where I need to convert from one set of units to another (e.g. centimeters to inches and so forth).

    I had a bunch of small helper functions to convert from X to Y, but these kept growing every time we needed to handle something which hadn't been anticipated.

    This kind of thing also grows quadratically: if we have 4 'unit types' and we add a 5th one, we need to add 8 new methods to convert each other type to and from the new type.
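    Concretely, a full set of direct pairwise conversions for n unit types needs one method per ordered pair, so adding a 5th type to 4 existing ones means 8 new methods:

```ruby
# One conversion method per ordered pair of unit types: n * (n - 1) of them.
def conversion_methods(n)
  n * (n - 1)
end

conversion_methods(4)  # => 12
conversion_methods(5)  # => 20
# so adding a 5th type to 4 existing ones means 20 - 12 = 8 new methods
```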

    A few hours of refactoring later, I have this, which I think is kind of cool, and will enable me to delete dozens of small annoying meters_to_pts methods all over the place.

    Disclaimer: This is definitely not good OO. A hash is not and never should be a unit converter. In the production code I will refactor this to build an actual Unit Converter class which stores a hash internally :-)

    
    # Builds a unit converter object given the specified relationships
    #
    # converter = UnitConverter.create({
    #  # an entry :a => {:b => c} means: to convert FROM b TO a, multiply by c
    #  # (e.g. :pts => {:inches => 72} - there are 72 points per inch)
    #  :pts    => {:inches => 72},
    #  :inches => {:feet   => 12},
    #  :cm     => {:inches => 2.54, 
    #              :meters => 100},
    #  :mm     => {:cm     => 10},
    # })
    #
    # You can then do
    #
    # converter.convert(2, :feet, :inches) 
    # => 24
    #
    # The interesting part is, it will follow any links which can be inferred
    # and also generate inverse relationships, so you can also (with the exact same hash) do
    #
    # converter.convert(2, :meters, :pts) # relationship inferred from meters => cm => inches => pts
    # => 5669.29133858268
    #
    class UnitConverter < Hash
      
      # Create a conversion hash, and populate with derivative and inverse conversions
      def self.create( hsh )
        returning new(hsh) do |h|
          # build and merge the matching inverse conversions
          h.recursive_merge! h.build_inverse_conversions
          
          # build and merge implied conversions until we've merged them all
          while (convs = h.build_implied_conversions) && convs.any?
            h.recursive_merge!( convs )
          end
        end
      end
      
      # just create a simple conversion hash, don't build any implied or inverse conversions
      def initialize( hsh )
        merge!( hsh )
      end
      
      # Helper method which does self.inject but flattens the nested hashes so it yields with |memo, from, to, rate|
      def inject_tuples
        h = Hash.new{ |h, key| h[key] = {} }
        
        self.inject(h) do |m, (from, x)|
          x.each do |to, rate|
            yield m, from, to, rate
          end
          m
        end
      end
      
      # Builds any implied conversions and returns them in a new hash
      # If no *new* conversions can be implied, will return an empty hash
      # For example
      # {:mm => {:cm => 10}, :cm => {:meters => 100}} implies {:mm => {:meters => 1000 }}
      # so that will be returned
      def build_implied_conversions
        inject_tuples do |m, from, to, rate|
          if link = self[to]
            link.each do |link_to, link_rate|
              # add the implied conversion to the 'to be added' list, unless it's already contained in +self+,
              # or it's converting the same thing (inches to inches) which makes no sense
              if (not self[from].include?(link_to)) and (from != link_to)
                m[from][link_to] = rate * link_rate 
              end
            end
          end
          m
        end
      end
      
      # build inverse conversions
      def build_inverse_conversions
        inject_tuples do |m, from, to, rate|
          m[to][from] = 1.0/rate
          m
        end
      end
      
      # do the actual conversion
      def convert( value, from, to )
        value * self[to][from]
      end
    end
    

    I'm not sure if deriving it from Hash is the right way to go, but it basically is just a big hash full of all the inferred conversions, so I'll leave it at that.
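    If I do end up refactoring it as the disclaimer suggests, the composition version is small. A hypothetical sketch (the class and method names here are mine, but it uses the same hash layout as above, rates[target][source] = multiplier):

```ruby
# Sketch: a unit converter that wraps the conversion hash instead of being one.
# rates[target][source] is the multiplier, matching the layout in the post.
class WrappedUnitConverter
  def initialize(rates)
    @rates = rates
  end

  def convert(value, from, to)
    value * @rates.fetch(to).fetch(from)  # fetch raises on unknown units
  end
end

conv = WrappedUnitConverter.new(:inches => { :feet => 12 })
conv.convert(2, :feet, :inches)  # => 24
```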


    Update

    Whoops - this code requires 'returning', which is part of Rails' ActiveSupport, and an extension to the Hash class called recursive_merge!, which I found in a blog comment somewhere (so it's only fitting that I share back with this unit converter).

    Code for recursive_merge

    
    class Hash
      def recursive_merge(hsh)
        self.merge(hsh) do |key, oldval, newval|
          oldval.is_a?(Hash) ? 
            oldval.recursive_merge(newval) :
            newval
        end
      end
      
      def recursive_merge!(hsh)
        self.merge!(hsh) do |key, oldval, newval|
          oldval.is_a?(Hash) ? 
            oldval.recursive_merge!(newval) :
            newval
        end
      end
    end
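    For illustration, here is recursive_merge on a pair of nested hashes like the converter uses (the extension is inlined so the snippet stands alone):

```ruby
# Same recursive_merge as above, inlined for a self-contained example.
class Hash
  def recursive_merge(hsh)
    merge(hsh) do |_key, oldval, newval|
      # on a key collision, merge nested hashes instead of clobbering them
      oldval.is_a?(Hash) ? oldval.recursive_merge(newval) : newval
    end
  end
end

a = { :cm => { :inches => 2.54 } }
b = { :cm => { :meters => 100 } }
a.recursive_merge(b)  # => {:cm=>{:inches=>2.54, :meters=>100}}
```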
    

    Code for returning

    class Object
      def returning( x )
        yield x
        x
      end
    end
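    Usage is just "yield the object, then return it" - Ruby 1.9 later shipped this same pattern as Object#tap. Re-defining it here so the example stands alone:

```ruby
class Object
  def returning(x)
    yield x  # let the block configure the object...
    x        # ...then hand the object back, regardless of what the block returned
  end
end

returning([]) { |list| list << 1 << 2 }  # => [1, 2]
```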
    

    Monday, July 14, 2008

    HaveBetterXpath

    I'm rspeccing some REST controllers which return XML, and wanting to use XPath to validate the responses.

    I came across this

    http://blog.wolfman.com/articles/2008/01/02/xpath-matchers-for-rspec

    Thanks to him. It worked nicely (I couldn't be bothered messing about with hpricot to get that going), but I wasn't as happy with the API as I could have been.

    Example of that API:

    response.body.should have_xpath('/root/node1')
    response.body.should match_xpath('/root/node1', "expected_value" )
    response.body.should have_nodes('/root/node1/child', 3 )
    

    I didn't like the fact that there were 3 distinct matchers, and that match_xpath didn't work with regexes. I reworked it, so the API is now

    response.body.should have_xpath('/root/node1')
    response.body.should have_xpath('/root/node1').with("expected_value") # can also pass a regex
    response.body.should have(3).elements('/root/node1/child') # Note actually extends string class and uses normal rspec have matcher
    

    Extending the String class to support elements(xpath) is also a win, because it lets you do things like

    
    response.body.elements('/child').each { |e| more complex assert for e here }
    
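    Under the hood, elements(xpath) is just REXML's XPath.match over a freshly parsed document; a quick standalone demonstration:

```ruby
require 'rexml/document'

# What String#elements boils down to: parse the string, run the XPath query.
xml = "<root><child>a</child><child>b</child></root>"
nodes = REXML::XPath.match(REXML::Document.new(xml), '/root/child')

nodes.size        # => 2
nodes.first.text  # => "a"
```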

    Without further ado, new code here:

    
    # Code borrowed from
    # http://blog.wolfman.com/articles/2008/01/02/xpath-matchers-for-rspec
    # Modified to use one matcher and tweak syntax
    
    require 'rexml/document'
    require 'rexml/element'
    
    module Spec
      module Matchers
    
        # check if the xpath exists one or more times
        class HaveXpath
          def initialize(xpath)
            @xpath = xpath
          end
    
          def matches?(response)
            @response = response
            doc = response.is_a?(REXML::Document) ? response : REXML::Document.new(@response)
            
            if @expected_value.nil?
              not REXML::XPath.match(doc, @xpath).empty?
            else # check each possible match for the right value
              REXML::XPath.each(doc, @xpath) do |e|
                @actual_value = e.is_a?(REXML::Element) ? 
                  e.text : 
                  e.to_s # handle REXML::Attribute and anything else
      
                if @expected_value.kind_of?(Regexp) && @actual_value =~ @expected_value
                  return true
                elsif @actual_value == @expected_value.to_s
                  return true
                end
              end
              
              false # our loop didn't hit anything, mustn't be there
            end
          end
          
          def with_value( val )
            @expected_value = val
            self
          end
          alias :with :with_value
    
          def failure_message
            if @expected_value.nil?
              "Did not find expected xpath #{@xpath}"
            else
              "The xpath #{@xpath} did not have the value '#{@expected_value}'\nIt was '#{@actual_value}'"
            end
          end
    
          def negative_failure_message
            if @expected_value.nil?
              "Found unexpected xpath #{@xpath}"
            else
              "Found unexpected xpath #{@xpath} matching value #{@expected_value}"
            end
          end
    
          def description
            "match the xpath expression #{@xpath}, optionally matching its value"
          end
        end
    
        def have_xpath(xpath)
          HaveXpath.new(xpath)
        end
        
        # Utility function, so we can do this: 
        # response.body.should have(3).elements('/images/')
        class ::String
          def elements(xpath)
            REXML::XPath.match( REXML::Document.new(self), xpath)
          end
          alias :element :elements
        end
    
      end
    end
    
    

    Monday, July 07, 2008

    How to: load the session from a query string instead of a cookie

    We use SWFUpload to upload some images in a login-restricted part of the site.

    There was a problem, however: we weren't able to get SWFUpload to send the normal browser cookie along with its HTTP file uploads, so the server couldn't tell which user was logged in.

    The 'normal' solution to this is to add the session key to the query string, and have the server load the session from the query string if the cookie isn't present - only Ruby/Rails doesn't support doing that.

    A nice guy with the handle 'mcr' in #rubyonrails on irc.freenode.org worked out how to make this work, by patching Ruby's cgi/session.rb.

    Instructions

    1. Copy cgi/session.rb out of your Ruby standard library into your Rails app's lib folder, and apply the patch below
    2. Explicitly load the file from lib, which will then override the built-in code

    Needless to say this will stop working if the ruby standard library version of cgi/session changes, but I don't see that as being very likely
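    The core of the patch is the query string parsing. Outside of Rails you can get the same effect with plain CGI unescaping - here's a minimal sketch (the helper name is mine, and it handles only flat key=value pairs, not Rails' nested params):

```ruby
require 'cgi'

# Minimal sketch of the patch's idea: fish a session id out of the query
# string when no cookie is present. Flat key=value pairs only.
def session_id_from_query(query_string, key = '_session_id')
  pairs = query_string.to_s.split('&').map { |chunk| chunk.split('=', 2) }
  match = pairs.find { |k, _v| CGI.unescape(k.to_s) == key }
  match && CGI.unescape(match[1].to_s)
end

session_id_from_query('foo=1&_session_id=abc123')  # => "abc123"
session_id_from_query('foo=1')                     # => nil
```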

    Patch in unified diff format:

    
    --- /usr/lib/ruby/1.8/cgi/session.rb 2006-07-30 10:06:50.000000000 -0400
    +++ lib/cgi/session.rb 2008-07-07 21:07:12.000000000 -0400
    @@ -25,6 +25,9 @@
     
     require 'cgi'
     require 'tmpdir'
    +require 'tempfile'
    +require 'stringio'
    +require 'strscan'
     
     class CGI
     
    @@ -243,6 +246,20 @@
         #       undef_method :fieldset
         #   end
         #
    +    def query_string_as_params(query_string)
    +      return {} if query_string.blank?
    +      
    +      pairs = query_string.split('&').collect do |chunk|
    + next if chunk.empty?
    + key, value = chunk.split('=', 2)
    + next if key.empty?
    + value = value.nil? ? nil : CGI.unescape(value)
    + [ CGI.unescape(key), value ]
    +      end.compact
    +
    +      ActionController::UrlEncodedPairParser.new(pairs).result
    +    end
    +
         def initialize(request, option={})
           @new_session = false
           session_key = option['session_key'] || '_session_id'
    @@ -253,6 +270,7 @@
      end
           end
           unless session_id
    + #debugger XXX
      if request.key?(session_key)
        session_id = request[session_key]
        session_id = session_id.read if session_id.respond_to?(:read)
    @@ -260,6 +278,12 @@
      unless session_id
        session_id, = request.cookies[session_key]
      end
    +
    + unless session_id
    +   params = query_string_as_params(request.query_string)
    +   session_id = params[session_key]
    + end
    +
      unless session_id
        unless option.fetch('new_session', true)
          raise ArgumentError, "session_key `%s' should be supplied"%session_key
    
    
    

    Sunday, July 06, 2008

    How to: Avoid getting your database wiped when migrating to rails 2.1

    We recently migrated some projects from rails 1.2 to 2.1.

    In doing this, we encountered a bug where sometimes (in production only) running rake db:migrate goes wrong, and re-runs all your migrations

    The unhappy side effect of it re-running ALL the migrations is that it effectively re-creates your entire database, and you lose all your data. USEFUL.

    I didn't have the time or the luxury to figure out quite why this was happening; if anyone does, please comment and let me know what it was. Apparently there have been a few other blogs mentioning it, but I don't have any of them at hand.

    The workaround is to manually create the schema_migrations table before you run rake db:migrate in rails 2.1.

    If you put the following script in your RAILS_ROOT/db directory, and run it, it will do that.

    Enjoy. (Disclaimer: if there's a bug in the script, and it does anything awful, it's not my fault! You have been warned!)

    require File.dirname(__FILE__) + '/../config/environment'
    
    # Define some models
    class SchemaInfo < ActiveRecord::Base
      set_table_name 'schema_info'
    end
    class SchemaMigration < ActiveRecord::Base; end
    
    # Create the schema_migrations table
    ActiveRecord::Migration.class_eval do
      create_table 'schema_migrations', :id => false do |t|
        t.column :version, :string, :null => false
      end
    end
    
    # Work out the migrated version and populate the migrations table
    
    v = SchemaInfo.find(:first).version.to_i
    puts "Current schema version is #{v}"
    raise "Version number doesn't seem right!" if v == 0
    
    1.upto(v) do |i|
     SchemaMigration.create!( :version => i )
     puts "Added entry for migration #{i}"
    end
    
    # Drop the schema info table, as rails-2.1 won't automatically do it thanks to our hacking
    ActiveRecord::Migration.class_eval do
      drop_table 'schema_info'
    end
    

    How To: Create old rails apps when you have newer gems installed

    My dev server has the gems for rails 1.2.6, 2.0.2 and 2.1.0 all installed.

    You can see which ones you have by running

    gem list --local | grep rails

    The problem is, when I create new rails apps, it always uses the latest version. If I explicitly want to create a 1.2.6 or 2.0.2 app, then I can do it like this

    rails _1.2.6_ some_old_app

    Useful.

    For the technically nosey, we can see how this works by reading the source of /usr/bin/rails, which is here

    require 'rubygems'
    version = "> 0"
    if ARGV.first =~ /^_(.*)_$/ and Gem::Version.correct? $1 then
      version = $1
      ARGV.shift
    end
    gem 'rails', version
    load 'rails'
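    The interesting bit is the underscore convention; you can see the match in isolation (wrapped in a hypothetical helper so it's easy to poke at):

```ruby
require 'rubygems'

# The _X.Y.Z_ argument convention: strip the underscores, then check the
# result is a well-formed gem version string.
def pinned_version(first_arg)
  if first_arg =~ /^_(.*)_$/ and Gem::Version.correct?($1)
    $1
  end
end

pinned_version('_1.2.6_')   # => "1.2.6"
pinned_version('some_app')  # => nil (no leading/trailing underscores)
```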
    

    How to: Rails 2.0 and 2.1 resources with semicolons

    Rails 1.X used semicolons as method separators for resources, so you'd get

    http://somesite/things/1;edit

    Rails 2.X switches this to

    http://somesite/things/1/edit

    This is nice and all, but some of us have actual client applications which can't all just be upgraded instantly.

    To make the semicolon-routes still work in rails 2.X, so you don't break all your clients, do this

    At the TOP of routes.rb, before the ActionController::Routing::Routes.draw block

    # Backwards compatibility with old ; delimited routes
    ActionController::Routing::SEPARATORS.concat %w( ; , )

    and, at the BOTTOM of routes.rb BEFORE the end

    # Backwards compatibility with old ; delimited routes
    map.connect ":controller;:action"
    map.connect ":controller/:id;:action"

    Profit!

    Monday, June 30, 2008

    Failfox 3

    Firefox 3 is great

    BUT. I like to bookmark things by dragging from the URL bar to (a folder in) the bookmarks toolbar.

    Look what happens in FF3.

    You can't drag a bookmark onto a tooltip, so the whole thing fails.

    (@*#^$)(*&@#)($*&@#(*$&@(#*$#@

    Thursday, January 10, 2008

    Paragon NTFS for mac update

    In the comments of my last blog post, about the quick hack benchmark I did of Paragon NTFS for Mac OS X, Anatoly, the product manager from Paragon, replied. I'm reposting it here so it's not hidden behind that tiny little '1 comments' link at the bottom of the post.
    Dear Orion,

    My name is Anatoly. I am Product Manager for the Paragon NTFS for Mac OS X driver.

    Thank you for your time and efforts to measure the performance of the Paragon NTFS for Mac OS X driver.

    Frankly speaking, your results are not exactly correct for real-world usage of the driver. First of all, the Finder application handles files (copy, create, ...) using a 2MB block size rather than the 512B you tested (the "dd if=//tmp/bigfile of=/dev/null" command uses a 512B block size by default). Second, to get precise figures you have to unmount/mount the partitions every time you perform a test (the reason you got 87.45MB/sec).

    So, we retested our driver and would like to show you our results. We used commands similar to yours:

    For write:
    dd if=/dev/random of=/Volumes/bigfile bs=2m count=100

    For read:
    dd if=/Volumes/bigfile of=/dev/null bs=2m

    HFS+ Firewire: Write (MiB/sec) - 4.26; Read (MiB/sec) - 36.06.
    NTFS Firewire: Write (MiB/sec) - 4.24; Read (MiB/sec) - 35.26.

    Please note that in case we use "bs=1m" we get:

    HFS+ Firewire: Write (MiB/sec) - 4.34; Read (MiB/sec) - 39.29.
    NTFS Firewire: Write (MiB/sec) - 4.30; Read (MiB/sec) - 42.25.

    According to our tests we can assert that our driver has the same performance as the native HFS+ driver.

    Let me know if I am wrong.

    Thank you,
    Anatoly.
    Well, wow. I always feel special when important people from companies reply to me! If anyone is looking for numbers, use those ones, as he is obviously far more clued up about this than I am.

    I completely agree with his assertion that the driver performs as well as native HFS+

    At any rate, I'd already purchased the product, and it's been great. If you are like me and need to access NTFS drives from your mac, you really should buy it.

    Thanks!

    Sunday, November 25, 2007

    5 minute performance picture: Paragon NTFS for Mac OS X

    I have a macbook pro, and a large amount of files, and I like to play computer games.
    So, I have a large external firewire/USB2 hard drive, and boot camp.
    This also means I care about NTFS access from OSX. 
    I'd been running MacFuse + NTFS3G. The performance was not toooo bad, but it was chock full of bugs. Drives would show up as network drives, and be called "-n External" and "-n" instead of "External". Not to mention that sometimes stuff would just randomly break. Files would sometimes disappear or move around in the finder and sometimes I just simply couldn't mount the drive. It sucked pretty hard, so I ended up booting up vmware and accessing that drive via vmware's USB2 mapping + samba under the windows VM. Not cool
    Suffice to say I was very happy when I saw the release of Paragon NTFS for OSX
    I downloaded it, got rid of MacFUSE and NTFS3G, and ran some benchmarks.
    Before the benchmark results, let me first say that even if it was just as slow as MacFUSE/NTFS3g, Paragon NTFS would still be worth a look, because it seems (so far) to be rock solid. Drives show up as proper drives in the finder. The volume labels are fine, as is everything else I can see. There is no lag, and I even now have the option of backing up my boot camp partition with Time Machine. Basically it's as if Apple had actually bothered to implement full NTFS support in leopard. That's cool.
    Anyway, Benchmarks:
    To get the write speeds, I did this:
    dd if=/dev/random of=//tmp/bigfile bs=1m count=200
    For the read speeds, I did this:
    dd if=//tmp/bigfile of=/dev/null
    Yes, I am aware this is a crap method of benchmarking drives/filesystems. I'm not AnandTech and I don't have days to do this.
    Computer: MacBook Pro 2.2ghz (the cheapest one)
    External NTFS drive: 7200RPM 500gig seagate with 16 meg of cache
    External HFS+ drive: 7200RPM 160gig seagate with 8 meg of cache
    Both use the identical dirt cheap firewire/USB2 enclosures I found at the local PC shop
                      WRITE (Bytes/Sec)  WRITE (MB/Sec)  READ (Bytes/Sec)  READ (MB/Sec)
    HFS+ Firewire     6049969            5.77            91697378          87.45
    NTFS Firewire     6645725            6.34            19899810          18.98
    HFS+ Local        6565372            6.26            90154137          85.98
    NTFS Local        6495106            6.19            16776180          16.00
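    For the record, the MB/Sec columns are just the Bytes/Sec figures divided by 2^20:

```ruby
# Sanity-checking the table: bytes/sec -> MB/sec (1 MB = 2**20 bytes).
(6049969 / 2.0**20).round(2)   # => 5.77   (HFS+ Firewire write)
(91697378 / 2.0**20).round(2)  # => 87.45  (HFS+ Firewire read)
```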
    Conclusions:
    HFS+ is obviously doing some kind of caching on those reads, as there's no way you can get 85+MB/sec off a plain old 7200rpm drive, let alone the 5400rpm local drive in the MacBook Pro. For actual use, I can't tell the difference between the NTFS and HFS+ drives.
    Also, the read/write speeds suck compared to the 30/25 odd MB/sec windows reports when reading/writing files to the disk. But windows lets you enable write caching for removable drives. Maybe OSX doesn't do this. I don't know.
    Apart from that, it keeps up with HFS+ and in some cases beats it.
    That's not half bad. I might send some of my hard-earned cash Paragon's way.

    Tuesday, October 30, 2007

    How to manually send an email using Rails' ExceptionNotifier Plugin

    We have a situation in our Rails app where we want to catch an exception and display a custom error message to the user, BUT we still want the exception notifier to fire, so we get all the detailed backtrace data etc. and can deal with it if it's a problem on our end.



    Without further ado, here is the code.

    begin
    
        # b0rk b0rk b0rk
    
    rescue => exception
        fake_params = { :id=>some_id, :etc=>'etc' }
        fake_request = ActionController::AbstractRequest.new
        fake_request.instance_eval do
            @env = { 'HTTP_HOST'=>'fake_host' }
            @parameters = fake_params
        end
    
        ExceptionNotifier.deliver_exception_notification( exception, ActionController::Base.new, fake_request )
    end

    Enjoy :-)



    Monday, June 04, 2007

    5 Things that I don't like about Ruby

    I can't remember the quote or source, but there's a pseudo programmer-interview question which goes something like this: "What's your favourite programming language?" "OK, what are 5 things that are wrong with it that other languages do better?" This is something I've thought about from time to time, so I figured I'd give it a shot. Obviously Ruby is my favourite programming language at the moment, mostly due to the map and inject functions :-)

    1. Green Threads are Useless!

    The Ruby interpreter's threading is co-operative - it can't context-switch a thread unless that thread happens to call one of a number of Ruby methods. This means that as soon as you hit a long-running C library function, your entire Ruby process hangs. I encountered this situation, and tried to ship the work out to another process using DRb. This was even more useless: the parent process blocks and waits for the DRb worker process to return from its remote function... which doesn't happen, as the worker is blocking on your C library function :-( I ended up having to create a database table, insert 'jobs' into it, and have a separate worker which polled the database once a second. STUPID.

    2. You can't yield from within a define_method, or write a proc which accepts a block

    It appears to be to do with the scoping of the block, but in ruby 1.8.X, this code doesn't work:

    class Foo
      define_method :bar do |f|
        yield f
      end
    end

    # This line raises "LocalJumpError: no block given",
    # even though there obviously is a block
    Foo.new.bar(6){ |x| puts x }

    The other way to skin this cat is as follows, which also doesn't work :-(

    class Foo
      define_method :bar do |f, &block|
        block.call(f)
      end
    end

    # The "define_method :bar do |f, &block|" line gives you:
    # parse error, unexpected tAMPER, expecting '|'
    # :-(

    This means there is a certain class of cool dynamic method generating stuff you just can't do, due to stupid syntax issues. Boo :-(
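    One workaround from that era (my addition, not from the original post) is to generate the method from a string with class_eval: the string goes through the normal parser, so a &block parameter is accepted just fine.

```ruby
# Generating the method from a string instead of define_method;
# the parsed-from-string version accepts a block parameter.
class Foo
  class_eval <<-RUBY
    def bar(f, &block)
      block.call(f)
    end
  RUBY
end

Foo.new.bar(6) { |x| x * 2 }  # => 12
```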

    3. The standard library is missing a few things

    I vote for immediate inclusion of Rails' ActiveSupport sub-project into the Ruby standard library. I'm sure I won't be alone in thinking this.

    4. Some of the standard library ruby classes really suck.

    Time, I'm looking at you.

    Strike 1: Not being able to modify the timezone. Seriously, people need to deal with more than just 'local' and 'utc' timezones. Yes, I know there are libraries, but they shouldn't need to exist. Timezones are not a new high-tech feature!

    Strike 2: The methods utc and getutc should be utc! and utc, in keeping with the rest of the language. This alone has caused several nasty and hard-to-spot bugs.

    Strike 3: What the heck is up with the Time class vs the DateTime class vs the Date class? This stuff should all be rolled into one and simplified.

    The Tempfile class is also notably annoying. Why doesn't it just subclass IO like any sane person would expect?

    5. The RDoc table of contents annoys me

    This is probably more of a "Firefox should have a 'search in current frame' feature" complaint, but under http://ruby-doc.org/core/, have you ever had the page for, say, Array open, and wanted to jump to its hash method? I usually do this using Firefox's find-as-you-type, but seriously, try doing just that in the RDoc-generated pages with the 3 frames containing everymethodever open. Cry :-(

    Monday, April 23, 2007

    HOWTO: Create a GParted LiveUSB which actually works WITHOUT LINUX

    EDIT:

    Turns out there is a windows version of syslinux, to be found HERE.

    If I'd kept reading for about 2 more minutes I would have found that out and managed to avoid pretty much all of the timewasting I did last night. Sigh. At least the other people trying to make it work using loadlin indicate I can't have been the only one to get it wrong :-(

    Also, the graphics card thing is a non-problem. Just choose Mini X-vesa in the gparted boot menus and it's fine.

    Moral of the story? Just because you've found a solution doesn't mean it's the best one. Keep looking until you can be sure it is!


    So, I wanted to repartition my hard drive tonight. I've used GParted before and it was brilliant, so off I went to download the liveCD again.

    Once at that site, I saw the LiveUSB option in the left-hand menu, and thought "Brilliant, I don't have to waste a CD-R and it will be much quicker anyway!"... Little did I know that PAIN and DESPAIR awaited me. I'll publish how I resolved this in the hope that fewer people will have to.

    Step 1: Download the GParted LiveUSB distro

    I clicked 'Downloads', from the navigation, followed the liveUSB links, and wound up here:
    http://sourceforge.net/project/showfiles.php?group_id=115843&package_id=195292
    I downloaded gparted-liveusb-0.3.1-1.zip, and unzipped it. Hooray, now what?

    Problem 1: The GParted LiveUSB documentation is crap!

    The GParted LiveUSB information here says firstly I need to download a shell script, then run it and copy some files to my USB key... Apart from a link to one forum post, that's it. Documentation? Instructions? Why do we need those? What could POSSIBLY go wrong?

    Problem 2: Running shell scripts on windows doesn't work too well

    The above shell script invokes syslinux, and just about everything else on the net that talks about creating bootable floppies/USB keys sooner or later invokes syslinux too. This seems to set up the boot record on the USB key so that you can boot Linux off it. DOS used to have a utility like this called 'system' or 'sys' or somesuch, but I can't remember. Seems simple enough, except I NEED LINUX TO RUN IT. Actually no I don't... see above. Oops.

    In my humble opinion, if I was running linux already, I wouldn't need the liveUSB, I'd just apt-get install gparted and run the damn thing. Yes some travelling sysadmins might have a linux box at home and also need a usb key to take around, but I'm not one of them. The entire reason I'm trying to get this liveUSB to run is because I DON'T have linux.

    So, I read that forum post, and noticed at the bottom someone using loadlin to load linux from a DOS system. Aha!

    Step 2: A whole crapload of google searching and researching...

    As I can't make my USB key linux-bootable without linux, I need to make it DOS-bootable, then get loadlin to load the linux kernel that comes with the gparted liveUSB. I'm going to skip all the boring details as it took me frickin ages and just explain what to do...

    Step 2.1: Download a DOS bootdisk so we have DOS

    Go to http://www.bootdisk.com/bootdisk.htm and download the "Windows 98 SE Custom, No Ramdrive" boot disk. This gets you an executable which expects to write to your floppy drive... except I don't have a floppy drive. BAH.

    Step 2.2: Extract the DOS bootdisk image with WinImage

  • Go to http://www.winimage.com/download.htm. I went for "winima80.zip" as I just wanted to run it once without the installer guff.
  • Run WinImage. Do File->Open, and point it at the boot98sc.exe file you downloaded in step 2.1.
  • Once this is open, choose Image->Extract, and dump all the DOS system files somewhere.
    Step 2.3: Make your thumbdrive bootable

  • Go to http://h18000.www1.hp.com/support/files/serveroptions/us/download/20306.html and download the HP Drive Key format utility. As far as I can tell this is the easiest way to make your USB key bootable. It works with pretty much everything, not just HP keys.
  • Make sure your USB key is plugged in
  • Run the HP program, and format your USB key using FAT (FAT32 should work too, but I didn't try it). Make sure to select "Create a DOS startup disk", and in the "using DOS system files located at:" box, enter the directory where you dumped the DOS system files from WinImage earlier.
  • Hit start, and wait for it to finish. JUST IN CASE YOU FORGOT, THIS WILL ERASE ALL THE FILES ON YOUR USB KEY, SO BACK THEM UP FIRST, K
    Step 2.4: Get loadlin

  • Go to http://distro.ibiblio.org/pub/linux/distributions/startcom/DL-3.0.0/os/i386/dosutils/ and download "loadlin.exe" to somewhere on your PC.
    Step 2.5: Copy files onto your USB key

  • Unzip "gparted-liveusb-0.3.1-1.zip" if you haven't already, and copy all the files into the root of your USB key. Your USB key should now contain those files, COMMAND.COM, IO.SYS, MSDOS.SYS and nothing else. No directories etc.
  • Also copy loadlin.exe into the root of your USB key
    Step 2.6: Make loadlin run automatically

    Note: This is like in the forum post here, except it actually works. I think that's out of date.
  • In the root of your USB key, create a new file called "loadlin.par"
  • Open it with notepad or something, and put this in it: linux noapic initrd=initrd.gz root=/dev/ram0 init=/linuxrc ramdisk_size=65000 (for those interested, those are the kernel boot parameters, which I stole out of syslinux.cfg from the gparted liveUSB distro. If that file changes, so should your loadlin parameters)
  • In the root of your USB key, create a new file called "autoexec.bat"
  • Open it with notepad or something, and put this in it: loadlin.exe @loadlin.par
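    For reference, here's exactly what those two files in the root of the key end up containing (the kernel parameters are the ones lifted from syslinux.cfg, as mentioned above):

```
loadlin.par:
linux noapic initrd=initrd.gz root=/dev/ram0 init=/linuxrc ramdisk_size=65000

autoexec.bat:
loadlin.exe @loadlin.par
```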
    Step 3: GO GO GO

    Reboot your computer! If you've set up your BIOS properly to boot off USB keys, your computer should now boot the GParted liveUSB. HOORAYZ!!!!1111

    Step 4: cry

    That's as far as I got, because the version of X.org on the liveUSB doesn't seem to like my NVidia 7600GT, so I'm stuck with a command prompt. Those of you with other graphics cards however should be fine. Whether the liveDistro includes command line partitioning tools I dunno, I might go look at that now.

    If anyone would like to copy/distribute these instructions, or edit copies/etc, you are free to, as I am putting this particular blog post in the public domain under the creative commons public domain license.

    Sunday, March 04, 2007

    Rails 1.2 changes

    Formats and respond_to

    In earlier versions of rails you could have your actions behave differently depending on what content type the web browser was expecting – eg:

    respond_to do |format|

        format.xml{ render :xml=>image.to_xml }

        format.jpg{ self.image }

    end

    However, to make this work you needed to set the HTTP Accept header in the HTTP web request, which is hard to do outside of tests. A new default route has now been added:

    map.connect ':controller/:action/:id.:format'

    The additional format parameter lets you override the format so you can now load people/12345.xml or image/12345.jpg in your web browser to test what happens instead of mucking about with HTTP headers.
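    To illustrate what the :format parameter buys you, here's a toy Ruby sketch (nothing to do with Rails' actual routing code, just the idea) of splitting a format off the end of a path:

```ruby
# Toy sketch: pull a :format extension off a path, the way the
# ':controller/:action/:id.:format' route does. Not Rails internals.
def split_format(path)
  if path =~ %r{\A(.+)\.(\w+)\z}
    [$1, $2]      # e.g. "people/12345.xml" => ["people/12345", "xml"]
  else
    [path, nil]   # no extension, no format
  end
end
```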

    Note you still have to register MIME types for the formats you need – for format.jpg I had to put

    Mime::Type.register 'image/jpeg', :jpg

    in my environment.rb, as jpg is not registered by default.

    Named Routes

    map.index '/', :controller=>'home', :action=>'index'

    map.home '/:action/:id', :controller=>'home'

    These create a bunch of helper methods which you can use anywhere you'd supply a URL or parameters for a redirect – eg:

    def first_action

        redirect_to index_url # redirects to /

    end



    def second_action

        redirect_to home_url( :action=>'second' ) # redirects to /second

        # which is the 'home' controller.

    end





    <%= link_to 'home', index_url %>

    <%= link_to 'test', home_path( :action=>'test' ) %>

     

    The difference between foo_url and foo_path is that foo_url gives the entire url eg: http://www.site.com/people/12345 whereas foo_path just gives /people/12345
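    As a toy Ruby sketch (these aren't the real Rails helpers, just an illustration of the difference):

```ruby
# Illustration only: foo_url includes protocol and host,
# foo_path is just the path part. The site name is made up.
SITE = "http://www.site.com"

def person_path(id)
  "/people/#{id}"
end

def person_url(id)
  SITE + person_path(id)
end
```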

    Named routes give your code lots more meaning and make it shorter. A definite win for commonly used things.

    Resources

    CRUD means Create, Read, Update, Delete.

    These map to the four HTTP methods – POST, GET, PUT, DELETE.

    HTTP methods let you have shortcuts, so instead of /people/create you can just do an HTTP POST to /people. Also /people/show/1 maps to GET /people/1, etc etc
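    To make the mapping concrete, here's a sketch in plain Ruby of those verb/path shortcuts (illustrative data only, not something Rails generates):

```ruby
# Old action-style URL => [HTTP verb, new resource-style path].
# Purely illustrative - this is the convention, not Rails code.
CRUD_SHORTCUTS = {
  "/people/create"    => [:post,   "/people"],
  "/people/show/1"    => [:get,    "/people/1"],
  "/people/update/1"  => [:put,    "/people/1"],
  "/people/destroy/1" => [:delete, "/people/1"],
}
```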

    Routes are created differently – for the above it is

    map.resources :people

    Run script/generate scaffold_resource people to have a look

    NOTE: Rails expects resources in both the routes.rb and controller names to be named in plural - eg:

    www.example.com/people/1 instead of www.example.com/person/1

    Philosophy

    Basically they are encouraging you to write your controllers and app so that everything revolves around either a create, read, update, or delete of some resource.

    Contrived Example: User Login sessions:

    Old way – Revolves around action:

    User Logs in – post a form to /users/login – this sticks a 'Login Token' of some sort in the session to identify them.

    User does stuff – look up the session and link it back – might put an is_logged_in? method on your user model or something.

    User Logs out – posts a form to /users/logout – this removes the 'Login Token' from the session.

    New way – Revolves around resources

    Identify what the 'resource' is – in this case it's the Login Token.

    User Logs in – Create a LoginToken by POSTing a form to /LoginTokens – stick its id in the session or something

    User does stuff – Find the correct LoginToken based on its id, check it's valid etc.

    User Logs out – Delete the LoginToken by DELETEing /LoginTokens/1
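    Here's a plain-Ruby sketch of that flow (no Rails involved; the class and method names are made up for illustration). The LoginTokens controller would just be a thin wrapper over something like this:

```ruby
# Resource-style login flow as a plain Ruby model. The in-memory
# store is a stand-in for whatever persistence you'd really use.
class LoginToken
  @@store = {}
  @@next_id = 0

  attr_reader :id, :user

  def initialize(user)          # "log in": POST /LoginTokens creates one
    @user = user
    @id = (@@next_id += 1)
    @@store[@id] = self
  end

  def self.find(id)             # "does stuff": look the token up by its id
    @@store[id]
  end

  def self.delete(id)           # "log out": DELETE /LoginTokens/1
    @@store.delete(id)
  end
end
```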

    Conflict of interest?

    This LoginTokensController is behaving a lot like a model even though it's a controller. In fact you should create a model for it. The LoginTokensController should only be a lightweight wrapper around this model. This is a definite win if you can structure your app like this, because it separates the different areas of code out.

    In the old way we had the users controller handling login, logout, and whatever else it needed to do – probably about half a dozen other unrelated things. This gets you messy code which is hard to understand/modify. By moving each part out to its own controller we end up with several separate nice clean controllers instead of one big messy one – easier to maintain and to see who's responsible for what. Very important!

    REST API's

    Many blog entries talk about how you can get an externally accessible API 'for free' by extending these CRUD controllers. The standard example is something like:

    Now that your users are accessible via GET,POST,etc to /users/1, we can extend that controller using respond_to so that you can also query it for an XML or JSON representation of the user – this can then be used by other websites/apps for free! Hooray!

    This is nice in theory but not so nice in practice. Why?

    If you are in a webapp, doing a POST to create a new LoginSession will result in a redirect_to home_url or something like that. However for an external API, you're meant to return an http response code of 201 – Created to indicate the create was successful. Trying to jam these 2 things into the same controller is a mess.

    This does not mean the REST idea is bad, only that you need to think a bit more.

    If you've done the right thing and created a LoginSession model, then you can just create 2 lightweight controllers – one which fits into your web app, and another if you like which processes XML/JSON.

    You still get the major benefit which is that by thinking of stuff as resources, you get a much better design/structure of your app.

    ActiveResource

    If you have an external XML REST api, these resources end up looking a lot like some data that you might want to load, update, store, etc, like a kind of remote database.

    They therefore decided to make something called ActiveResource which would do for XML REST resources what ActiveRecord does for databases (in a limited fashion)

    For Example:

    class RemotePerson < ActiveResource::Base

        set_site "http://localhost/rest_people" ## the base URI

        set_credentials :username => "some_user", :password => "abcdefg" ## for HTTP basic authentication

        set_object_name 'person' ## the other end will expect data called 'person' not 'RemotePerson'

    end

    You can then do

    RemotePerson.find( 1 )

    This will fire an HTTP request at http://localhost/rest_people/1. It will load the resulting XML and convert it into an object. You can change its data, and call save, etc like you would with a piece of data from the database. When you have 2 sites that need to communicate with each other, this makes it a WHOLE lot easier

    The bad news – This isn't in rails 1.2. They pulled it out in one of the beta versions and there's no indication as to when it's coming back.

    The good news – I wrote one (a limited version thereof anyway) to replace what didn't ship with rails.

    to_xml, from_xml, to_json, etc

    For all these active resource things to work, they need an easy way to convert data to and from XML so it can be sent over the HTTP request. There are now new methods - to_xml, from_xml, to_json, and other stuff like that which will convert the object to and from xml. These have been added to Hash, ActiveRecord, and other things like that.

    Multibyte Support

    Rails 1.2 hacks the string class so it is now sort of Unicode (UTF-8) aware.

    TextHelper#truncate/excerpt and String#at/from/to/first/last will now automatically work with Unicode strings, but for everything else you need to call .chars or you will break the string.

    In other words, if you need to deal with foreign characters (and even in the US we probably will), String#length is broken and so is string[x..y] or just about anything else you'd want to do! Beware!

    I have no idea how this is going to impact storing strings in mysql etc.

    Tons of other bits and pieces

    Lots of things have now been deprecated – doing stuff like referencing @params instead of params, etc – these all get dumped to the logs as deprecation warnings. They are still ok now but will break in rails 2.0.

    image_tag without a file extension is also deprecated. You should always specify one.

     

    See here:

    http://www.railtie.net/articles/2006/09/05/rush-to-rails-1-2-adds-tons-of-new-features

    Friday, November 10, 2006

    Why everyone wants to get rid of the parentheses in lisp

    This post is pretty much a response to http://eli.thegreenplace.net/2006/11/10/the-parentheses-of-lisp/ It's a good post - if you haven't read it, eli seems to say that he's noticed a lot of people trying to use whitespace to remove most of the parens in lisp, and can't understand why. His opinion (as it seemed to me) was that removing them would be counterproductive, because the parens (and the uniform syntax they give) are what makes lisp so much better than everything else.

    My 2c:

    The case FOR s-expressions

    * Uniform syntax is theoretically very appealing (from a purist point of view)

    * It lets you write macros. Macros are incredibly powerful and basically awesome. I have macro-envy in most languages most of the time.

    The case against s-expressions

    * s-expressions are not at all like how I think about things (or how anyone else who is not a die-hard lisper thinks about things), because the lack of syntax is completely at odds with natural language.

    To elaborate on that last point: Because english is my native language, that's how I think. A fair amount of the time, code which I consider "good" ends up looking (and structured) like a shorthand version of english, because that's what I find is clear and understandable.

    I can write Ruby, C++ and C# and most other blub languages in such a way that maps relatively closely with how I think. In short, they 'fit my brain'. I also realise that over time my brain has adjusted to fit them as well, so I am aware that this kind of thing is probably always going to be biased towards the incumbents.

    Conclusion

    I believe that with the exception of a few idealists who hold the 'uniform syntax for purity's sake' argument above all else, pretty much everyone else is in it for the macros. To get them, I see two paths -

    1) Change yourself: Do enough lisp programming, for a long enough time, and put in the effort to make your thought process (at least with respect to programming) match with s-expressions. This seems to be what all the 'hardcore' lispers have done, and the evidence seems to point towards this having a huge and amazing payoff. I'm nowhere near this goal, but this is eventually what I'm aiming for with my casual lisp programming... It is however, (like any learning) long and dare I say, hard.

    2) Try and change lisp: Try and make the syntax fit better with your brain, so you (and all the other regular programmers) don't have to put in the hard yards adjusting quite so much. This is what I believe the lisp-whitespace people are aiming for.

    I do find the whitespace-lisp easier to read/understand than normal lisp, and it doesn't seem to sacrifice any functionality (macros still work), so it could be a winner.

    Then again, significant whitespace brings in a whole host of other problems, and makes it not so "pure", so perhaps not.

    Is it a good idea? In the end, I vote for a definite "maybe" :-)

    Tuesday, October 17, 2006

    Programming best practices

    I was going to call this "X secrets of highly effective developers", like some other people, only these things shouldn't be secrets. Note this is, as always, my not-so-humble opinion, so it is entirely likely that this article is either a) misguided, b) missing things, or c) entirely wrong, but I can't give you anyone else's opinion now can I.
    These are all typical clichés, but I'd like to try explain them anyway, just as a brain dump.

    1. Code for people, not computers.

    This is really the absolute number one goal. Everything else can be taken as a corollary to this.

    If you don't code for people, you are writing un-maintainable code. However, it's easy to throw the term around like a lot of other slogans/buzzwords without actually having a solid understanding. What this means to me is in fact "Try and write your programs as if they were plain english"

    Well, you know, not quite english, because english has its own giant set of problems too, but the point I'm trying to make is that someone else should be able to read your code and it should flow as if it has topics, headings, sentences, paragraphs, and so on. You should read it like a book, not decipher it like a code. In fact, "code" is a crap word; we should call it something like "instructions" instead.

    How do we do this? With years of experience, and a constant drive to learn and apply new things.
    But for now, here's a couple of pointers which I find have helped me so much that I feel like slapping other developers who don't do them:

    2. Use good names

    This is obviously a subset of #1, but if I had to pick the most important thing, this would be it. Again, "Use good names" is a meaningless phrase; what you should do is "call things what they are". Whenever you have a variable/function/class/whatever, ask yourself "what actually is it? a buffer? a file handle? a person? what?", and then call the variable that. Simple, but almost always overlooked by most programmers. Other developers should almost always be able to look at a variable/function/class and make a correct guess as to what it is and what it does; otherwise you probably have a bad name.
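    A quick sketch of the difference (the function names and example data here are made up):

```ruby
# The same function twice: first with names that tell you nothing,
# then with names that say what things actually are.
def process(d)
  d.select { |x| x > 100 }
end

def orders_over_hundred(order_totals)
  order_totals.select { |total| total > 100 }
end
```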

    This is doubly useful because sometimes you will have trouble expressing what something actually is. Sometimes this is just not being able to think of the right word (go learn english :-D ), but more often than not, it is a strong signal that your design isn't correct, or that you have accidentally gone down the twisty path towards a tangled mess of garbage.
    For example, if you can't think of a single good name for a class or function, it's probably because it's doing more than one job, and should be broken up into 2 classes/functions.

    At the end of the day, if you can't even think of a reasonable name for a thing which makes your program work, then how will anyone else (usually you, 6 months later) ever hope to understand it?

    3. Use abstraction

    This can be "bottom up" programming, or "top down" or whatever design methodology is in fashion at the time, but the important thing is that you build code on top of other code, not alongside it. You are creating a pyramid, not tiling a floor.
    That may not have made too much sense - to try explain it a bit better, think of writing a program that opens a file, writes some string to it, and closes the file.

    If you were to use the all-too-common floor tiling method, you'd have a big long function (or lots of small functions running in sequence, whatever), which would do the following in sequence:
    Allocate a file handle -> call the API open function -> allocate the memory for the string -> keep track of how many bytes we have -> write the memory to the file using the API file writing function -> close the file -> free the string memory.
    All the operations are on the same "level"; the stuff is just happening in a big row, like laying down tiles next to each other.

    If we are to write the above using abstractions, we'd instead have a file class, and a string class. The file class would deal with the file API (handles and stuff), and the string class would deal with the string memory. Then, instead of our program allocating handles and memory, it can just deal with files and string classes. It would do the following.
    Create a file object -> Create a string object -> call file.write( string ) -> cleanup done automatically by objects.
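    In Ruby, a sketch of the abstraction version might look like this (using Tempfile just to keep the example self-contained):

```ruby
# The caller deals only in file and string objects; handles and
# memory are those objects' problem, not ours.
require "tempfile"

message = "some string"
file = Tempfile.new("demo")   # file object owns the OS handle
file.write(message)           # string object owns the bytes
file.rewind
contents = file.read
file.close!                   # cleanup is the object's job
```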

    When you write programs correctly with abstraction, you can stack the abstractions on top of each other, leading eventually to code that is almost like pseudocode or a domain-specific-language.
    This is what object oriented programming, and a lot of other techniques are actually for (as opposed to the common retarded view of inexperienced programmers, that OO isn't being done correctly unless all your classes use inheritance somehow)

    4. Do the simplest thing that can possibly work

    Now before I get branded as an agile zealot, not everything from agile is actually bad. What this actually means is Do the right thing, but do it the simplest and smallest way you can. Don't write code which doesn't directly help you get things done, or tries to solve problems you don't actually have.
    This also doesn't just apply to your higher-level design, but low level too. Functions/classes/interfaces/etc should all be as simple as possible, and do only what they need to.

    A classic example of doing the wrong thing here is building a big pile of classes and interfaces and message-handling code before you actually attack your main problem. Yes it's fun to build frameworks and architecture, but at the end of the day, it'll probably just get in the way.

    5. Don't repeat yourself, refactor instead.

    If you are like lots of developers I've seen, and believe the best way to start a new program/library/class is to find a similar one and copy/paste it into a new file, STOP NOW. BAD PROGRAMMER! SIT!

    If you find yourself writing the exact same code twice, refactor it into a common function or class.

    If you find yourself writing similar code twice, refactor the common bits into another function or class (generics, dynamic types or other ways of dodging verbose typing, and first-class functions are a big win here), and have the remaining different bits as small and clear as possible.
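    A small Ruby sketch of the "similar code" case, where the common walking code is factored out and the differing bit is passed in as a first-class block (the helper name is made up):

```ruby
# The shared strip-and-walk code lives in one helper; each call site
# supplies only the bit that actually differs, as a block.
def transform_lines(lines)
  lines.map { |line| yield(line.strip) }
end

shouted = transform_lines(["  hello ", " world "]) { |l| l.upcase }
quoted  = transform_lines([" hello "]) { |l| "<#{l}>" }
```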

    6. Good design does not come from 'design', but from refactoring.

    If you do all the other stuff, you'll probably find yourself left with a ton of small functions and classes, and all your other classes will be using them. This is already better than lots of giant functions which duplicate code and functionality, but it can get a bit messy if you don't clean them up.
    Most likely however, a bunch of your helper functions will all take similar parameters. These are prime candidates for making a new class. Remember also, not everything needs to be a class. It's fine to have a bunch of global functions in a namespace (or a static class if you're stuck in C# or java), if that's the nicest way to think about those kinds of objects. The main goal is to always try and make sure that your helper/library functions are as simple, clean, and useful as possible, and are arranged in clear and obvious groups. You can even write unit tests for them :-)

    Sooner or later, you'll probably end up with either some kind of framework (to my knowledge, this is how Rails came about), or a set of general classes, like the .NET base class library, but for dealing with the kinds of problems your company faces, or perhaps both.
    This is excellent, as you'll be able to re-use these things over and over again in future, meaning next time you have to write that stupid app which has to read the registry and write to files, you'll be able to use your framework/library code and do it in 5 minutes instead of a day. Your boss will love you, and you won't mind programming in C++ anymore because you can actually get things done now.

    Also, because you'll have created this framework/library code based on other working code, and based on what you actually need to do, and refactored it to best fit your problems as you go, you'll have stuff which is actually useful and good.
    People are stupid, and 99% of us can't design our way out of a paper bag. Most frameworks that get 'designed' up front wind up completely missing the point and solving the wrong problems in the wrong way. But, by keeping existing code simple, clean, non-repeating, and constantly refactoring it, we can end up with some well structured and maintainable code anyway. Owzat? :-)

    Sunday, October 15, 2006

    Diffing itunes music libraries with ironpython

    Intro:

    Ages ago when I first played with python, I found it was pretty cool, in a weird sort of way, and that I could probably love it once I had spent 6 months bending my brain around its quirky bits (too many underscores, whitespace importance, inconsistent and strange libraries)... And then soon after, I found ruby, which had none of the above problems, so I didn't bother learning any more python...

    Until a few weeks ago, when IronPython was released.

    For the uninitiated, IronPython is python which runs on and inside the .NET CLR (or mono, just not as quickly). My biggest blocker for regular python was the built in libraries/docs. I'm bound to be flamed, but the ones I looked at (file access, networking, HTTP, etc) just were not intuitive. IronPython however is a dream, because it uses all the .NET libraries instead (or as well as, if you like). I'm pretty familiar with the .NET BCL, so this was great.

    Actual content :-)

    During my ~4 years at my current job, I have built up a large collection of music which I'd listen to. I also have a large collection of music at home. As I'm leaving in 3 weeks, and will lose all that data, I wanted to take a copy of the music from my work computer home.

    The problem with this is that I only have a 6 gig ipod mini to transfer the songs on, so I can't just copy them all. I needed to diff the 2 music libraries, and only copy the songs that I don't have at home already. iTunes exports a large hairy pile-of-crap XML file when you ask it to export its library, so I thought this would be an opportunity to play about with some IronPython, and post it on the net in case it's useful to anyone else.

    Here it is, the comments are the documentation :-). Hopefully it's useful, if only as a quick demo of how things work in ironpython.

    # Import all the libraries we'll need
    import clr
    clr.AddReferenceByPartialName( "System.Xml" )
    from System import *
    from System.IO import *
    from System.Xml import *

    # Create a helper function to convert the iTunes XML file into a hash so it's actually useful
    # An example of one of the hash entries:
    # ret[ 'Disturbed: Prayer' ] = 'file://C:/path/disturbed_prayer.mp3'
    def fileToHash( fileName ):
        ret = {}
        doc = XmlDocument()
        doc.Load( fileName )
        # The XPath is ugly... Export your itunes library and take a look to see why
        for elem in doc.DocumentElement.SelectNodes( "/plist/dict/dict/dict" ):
            # song name is always the first <string>
            song = elem.SelectSingleNode( "string[1]/text()" ).Value
            # artist is always the second <string>
            artist = elem.SelectSingleNode( "string[2]/text()" ).Value
            # Unfortunately the filename is not fixed in the structure, so we have to
            # find <key>Location</key> and then move to the next element after it
            path = elem.SelectSingleNode( "key[text() = 'Location']" ).NextSibling.FirstChild.Value
            # Add it to our return-param
            ret[ "%s: %s" % ( artist, song ) ] = path
        return ret

    # Parse both files into separate hashes
    homeSongs = fileToHash( "itunes library home.xml" )
    workSongs = fileToHash( "itunes library work.xml" )

    # Create a new dict containing only songs that are at work but NOT at home
    diffSongs = dict( [ ( song, workSongs[song] ) for song in workSongs if not homeSongs.ContainsKey( song ) ] )

    # Write them all to the output file
    # The format of the output file is: file://C:/path/disturbed_prayer.mp3 # Disturbed: Prayer
    # The reason I've done it like this with the path first is hopefully to make it easier
    # for another script to be able to use it as a list of filenames to copy...
    writer = StreamWriter( "diffSongs.txt" )
    for str in [ "%s # %s" % ( diffSongs[song], song ) for song in diffSongs ]:
        writer.WriteLine( str )

    Disclaimer:
    1) This was meant to be quick to write, I didn't care about run performance or any other nifty tricks - it only takes 3 seconds to parse 2x 3.5 meg XML files, and that's good enough for me.
    2) Sorry about no syntax highlighting, I couldn't find any decent way to do it short of screenshotting my PSPad window.

    Wednesday, October 11, 2006

    Vista RC1 and RC2: Impressions

    Over the last couple of days I've installed both vista RC1 and RC2 at home, and today installed vista RC2 at work. Here's a quick brain dump of my thoughts about various things that have cropped up. Except where I explicitly say so below, RC1 and RC2 are pretty much the same.

    Aero Glass (fancy graphics):

    My home machine is an Athlon 64 3200+, with 512 RAM, and a Radeon 9700 graphics card w/128Mb. The graphics card is the main thing that gets hit by aero, and in RC1, aero wasn't enabled by default after the install. A quick look at the control panel to turn it on, and it was fine. I really liked it. Sure, it's just eye-candy, but who said computers have to be ugly? All in all I was very happy....

    HOWEVER: RC2 decides that I need 1 gig of RAM to enable aero. I haven't been able to find any overrides as of yet. This is stupid. Someone at the Microsoft marketing department has gotten their nose into this or something, because I KNOW aero runs fine on this machine with 512 RAM, having just run it the day before. W T F.

    Minor gripe: The "flip 3d" thing they have is useless. It offers less usefulness than just the standard alt-tab. To add to the blogosphere whining, why didn't they just verbatim rip Expose from the mac?

    The Vista Basic Theme (low rent graphics):

    You could just download a theme from deviantart or elsewhere for windows XP, and you quite literally wouldn't be able to tell the difference. It's debatable whether this theme is uglier than the default XP theme; it's certainly not much better, that's for sure.

    Performance (superfetch smart memory management and caching)

    This is a bit strange, but overall I was suitably impressed. Example: when I'd play Warcraft III on windows XP, quitting would bring a 30s to 2 minute "crunch" while everything got paged around the place. On vista, with identical hardware, it's much more responsive.

    Think of it like this:
    In XP, when you load a large program, it gets loaded into ram, then 100% of your CPU/HDD resources are free for use. When you quit, or do some other memory intensive thing, the system takes a dump for a while to sort itself out.
    In Vista, when you load a large program, it gets loaded, but only 95% of your CPU/HDD resources are free - the other 5% are used by superfetch tinkering away in the background - which means that when you quit (or do some other memory intensive thing), all the stuff it's done in the background pays off and your system just runs a bit slower instead of falling over.

    Programs which you never run take about the same time to load, and basically everything performs the same. Frequently used stuff like firefox loads MUCH more quickly, even though...

    Memory usage (oh noes! those horrible background tasks!)

    From (my admittedly fuzzy) memory, a vanilla install of XP RTM would use about 90 meg of RAM. Post installing SP2, it would use about 180. After I did the customary beatdown of all the unnecessary services, it'd use about 120 megs.

    Vista seemed to use about 300 megs of ram post install. After the services beatdown, it got down to 200 meg (not counting disk cache of course). Aero adds about 50 meg to this. However, the system overall seemed just as responsive as XP did - This is basically a testament to how good superfetch is, but vista still whores teh RAM.  Hopefully it won't be quite so bad when they RTM it, but I'm not holding my breath.

    As an aside, this is probably why they don't let you run aero on a machine with 512 RAM - 300 for vista + 50 (at least) for aero doesn't leave much for applications. I can see the reasoning behind it, but instead of just disabling aero on low-ram systems, there definitely needs to be an "I am not a retard" switch, so that I can use the free ram I got by turning off all the useless background crap to run aero. As I said earlier, I KNOW aero runs fine on this PC. WTF.

    Readyboost

    I didn't test this on RC1 because I didn't have the memory stick then, but on RC2 I am using an entire 1GB usb2 memory stick for Readyboost. I can't provide any actual numbers or anything, but from my very limited experience, it seems to make a noticeable difference in responsiveness. I'm very happy with it, and I'm usually picky about these kind of things.

    PS: Note I said "responsiveness", not "performance". Stuff still runs the same; it's just that those "crunch" moments - loading firefox, alt-tabbing out of a game, or anything else that beats the crap out of the pagefile/disk - are a whole heap better.

    PPS: No, adding 1 gig of readyboost to a 512 MB system still doesn't let you run aero. bastards.
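    The reason it helps responsiveness rather than raw performance makes sense if you think about latencies: flash is much faster than a spinning disk at small random reads, so caching those on the stick takes the sting out of pagefile thrashing. A toy Python model of that, with completely made-up latency numbers:

    ```python
    # Toy model of why a flash cache helps "crunch" moments: flash has
    # far lower random-read latency than a seeking hard disk, so serving
    # small random reads from flash cuts total wait time.
    # Latency numbers below are invented for illustration only.
    HDD_SEEK_MS = 12.0    # random read from a spinning disk
    FLASH_READ_MS = 0.5   # random read from USB flash

    def total_read_time(reads, flash_cache):
        """Sum the latency for a list of block IDs, given the set of
        blocks currently cached on flash."""
        return sum(FLASH_READ_MS if b in flash_cache else HDD_SEEK_MS
                   for b in reads)

    # A "crunch" moment: 100 small random reads, 80 of them cached.
    reads = list(range(100))
    cache = set(range(80))

    print(total_read_time(reads, set()))   # no cache: 1200.0 ms
    print(total_read_time(reads, cache))   # 80% cached: 280.0 ms
    ```

    Sequential reads are a different story (a hard disk streams those just fine), which is presumably why it helps the thrashy moments and not much else.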

    Other neat things

    I had mp3s playing, and upgraded my graphics drivers without rebooting or even skipping a beat in the music. It thrashed for a while, the screen went black for a second or two, and presto - new graphics drivers. Seriously impressed.

    Being able to type arbitrary commands, like "net stop server", into the search box in the start menu and into the explorer address bar is awesome.

    Windows explorer and media player now use the same format for album art as itunes, which is sweet.

    The new task manager/performance stuff is great.

    And the winner is!

    Overall, my favourite part of vista thus far would have to be the new windows explorer. I like the new clickable address bar format. The revamped 'documents and settings' arrangement is so much nicer. The customizable 'favourite links' panel is great. The searching and filtering is brilliant. The new start menu is insanely good.
    Oh, and not to mention that a) it doesn't hang when you try to access network shares, and b) if you're copying/moving/deleting a bunch of files and one of them fails, it carries on with the rest instead of just falling over like a useless pile of crap, the way XP and everything before it did. I could go on for hours. I love it. A million points to the shell team at MS.

    And the loser is!

    This is so cliché, I know, but user account control sucks. I agree with the principle in theory, but its implementation just seems to suck.

    On the one hand, you have things like copying a file to a "restricted" folder: one popup asking for confirmation would be fine, but before that you also get a dialog warning you that, if you continue, you will be prompted for confirmation. They could quite literally rewrite it as: "If you do this, we will annoy you with another dialog after this one. Are you sure you want us to pop the second dialog, so you can click Yes and be annoyed?"

    On the other hand, you have cases where it just doesn't kick in. If I use explorer to copy and paste a file into a "restricted" folder, it prompts me for confirmation and succeeds. If, however, I drag and drop the same file, it just fails with 'access denied' - no prompt, and no way around it. It also seems to have no awareness of anything other than explorer in some situations: I can't save files from firefox to some places, doing things from the command prompt pretty much just doesn't work, and so on.

    IMHO, it looks as if they haven't actually implemented UAC as part of the windows OS or API; they've just made administrators into restricted users and made explorer dick around with permissions when it launches applications. My recommendation? Turn it off like everyone else, and wait another 5 years - maybe MS will get it right next time.
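    To be fair, there is one developer-facing hook I'm aware of: an application can declare up front what privilege level it needs in its manifest, and then UAC prompts once at launch instead of failing halfway through. A minimal example of such a manifest (the level value is one of asInvoker, highestAvailable, or requireAdministrator):

    ```xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
      <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
        <security>
          <requestedPrivileges>
            <!-- Ask UAC to elevate this app once, at launch -->
            <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
          </requestedPrivileges>
        </security>
      </trustInfo>
    </assembly>
    ```

    Which is fine for new apps that know they need admin rights, but it does nothing for all the existing software (and explorer drag/drop) that just hits 'access denied'.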

    Conclusion

    Vista overall is worth the upgrade from XP. I don't know how much I'd pay for it, but it is definitely an overall improvement, and there don't really seem to be many downsides apart from the odd piece of software here and there. Almost everything runs just fine, and it's rock solid stable. Just remember to turn off UAC :-)