Ivan Porto Carrero

IO(thoughts) flatMap (_.propagandize)



Git/Github Survival Guide

Lately I’ve been helping a few people to get started on Github. I use git at the command line and my survival guide is also based on that way of interacting with Git. So I thought I’d write the procedure up so that I can just point people to this page.

The first tip I can give you, and most of what I’ll be talking about, is in the guides from github. When you’re used to Subversion or Team Foundation Server, for example, you’ll need to make a mental leap: your local copy is the master copy as far as you’re concerned. A remote server like github.com is a little bit like an afterthought. I think it goes a little bit like this: “Oh cool, I’ve built this really cool thing here and I’ve got it in this git repository on my machine. It would be cool if other people also had access. Wait a minute, I’ll just add a remote and push my stuff onto that server.” Problem solved.

Most of this guide applies to both Windows and *nix systems, except for the next part, which describes the install options for getting msysgit to behave nicely on your system.

==== Windows only ====

If you’re on Windows I suggest you use msysgit as your git client. You can probably do everything I’m about to explain from Explorer too if you want to use TortoiseGit or just prefer GUIs. I personally like having options, so I’ll probably use a mix of those in the future. OK, on to the install procedure.

Somewhere half-way through the install of msysgit it will ask you how far you want to integrate it. The correct choice is the middle one: Run Git from the Windows Command Prompt. For generating ssh keys etc you probably want to use OpenSSH.

==== Windows only end ====

When the install of msysgit is completed it is time to configure your git install for use with github. The first step is to tell git your username and email address. You will also need your API token, which you can find on your account page.

[~]$ git config --global user.name "Ivan Porto Carrero"
[~]$ git config --global user.email ivan@nowhere.com
[~]$ git config --global github.user casualjim
[~]$ git config --global github.token [[API TOKEN]]

This information can be found in the github guides: http://github.com/guides/tell-git-your-user-name-and-email-address. The configuration above is global but you can still override that on a per project basis.
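To see the per-project override in action, run git config without --global inside a repository; the local value wins over the global one. A quick sketch (the email address and temp directory are just examples):

```shell
# Inside a repository, plain `git config` writes to .git/config,
# which takes precedence over the global ~/.gitconfig.
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email "ivan@only-this-project.example"
git config user.email   # prints the repository-local value
```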

Now that git knows how to deal with github it is time to formally introduce your machine to the github server. To do so you may have to create a private/public SSH keypair. A tip before you start: you probably don’t want to type a password every time you push to github, so when you create your SSH key don’t use a passphrase (leave it blank when asked for one).

The procedure for creating the SSH keys can again be found in the github guides: http://github.com/guides/providing-your-ssh-key. On Windows I would suggest that you use the OpenSSH one. I use RSA keys but you can choose whichever flavor you want of course :) After generating the SSH keys you need to provide the public key to github on your account page.
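Generating the keypair boils down to one command. A sketch, assuming the OpenSSH tools are on your path (the file name here is just an example; the default is ~/.ssh/id_rsa):

```shell
# -t rsa: RSA key, -N "": empty passphrase so pushes don't prompt for one
ssh-keygen -t rsa -b 2048 -N "" -f ./github_key -q
cat ./github_key.pub   # the .pub contents are what you paste into your account page
```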

This should get you up and running with github. I’m assuming most people coming to github are familiar with subversion so I’ll try to map some common operations to the command sequence you need in git.

First things first I have a couple of aliases defined for some common operations.

You can just copy-paste the aliases section below into the .gitconfig file that you can find in the root of your personal folder: C:\Users\ivan\.gitconfig on Vista for me, and ~/.gitconfig in bash.

    [alias]
    ci = commit
    st = status
    co = checkout
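If you’d rather not edit the file by hand, the same aliases can be set from the command line and git writes them into .gitconfig for you:

```shell
# git config --global writes to the .gitconfig in your home directory
git config --global alias.ci commit
git config --global alias.st status
git config --global alias.co checkout
```

After this, `git st` behaves exactly like `git status`.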

If all you need is read-only access to a repository you can just clone it by its public clone URL, e.g. git clone git://github.com/casualjim/ironrubymvc.git

The first thing you’ll need to know is how to get source code, make changes and send in a patch. In github lingo this is called forking.

On github you fork the project you want to make changes to. Then you clone that fork onto your local machine and make your changes. You then push your changes back to your repository and send a pull request to the original project. That is all there is to sending in a patch: issue a pull request.

I forked ironrubymvc from Jimmy Schementi and send him pull requests regularly when I’ve completed a chunk of work on it. So here’s the sequence of commands I use to do this.

git clone git@github.com:casualjim/ironrubymvc.git

… make some changes …

To start updating the repository with my changes I’ll generally first ask for a status, to see if I need to add some files to ignore and if there are new files that need to be included:

git st

If there are files that need to be ignored I’ll add them to the .gitignore file in my project root. If there are still some new files that need to be added:
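Ignoring files is just a matter of listing one pattern per line in .gitignore. A minimal sketch (the bin/ and obj/ patterns are typical for a .NET project, but use whatever your build spits out):

```shell
# Each pattern goes on its own line; directories end with a slash.
printf 'bin/\nobj/\n*.suo\n' >> .gitignore
git status   # matching files no longer show up as untracked
```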

git add .

Then I’m ready to commit the changes to my local repository:

git ci -a -m "Made ironrubymvc do the dishes and ironing"

Now it’s time to push my changes to the github server.

git push

And now I need to go to the github website and send a pull request to Jimmy. He can then decide if he wants to apply the patch or make some changes.

The next step is to keep your fork in sync with the forked repository, so that you can continue to pick up their changes and ensure that your changes still work.

git remote add upstream git://github.com/jschementi/ironrubymvc.git

To automatically fetch and then merge the changes from the upstream repository you can pull from it. You have to tell pull the remote it has to pull from and the branch:

git pull upstream master
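Under the hood that pull is just a fetch followed by a merge; if you like to look before you leap you can run the two steps yourself (remote and branch names as used above):

```shell
git fetch upstream                          # download their commits, touch nothing of yours
git log master..upstream/master --oneline   # optional: peek at the incoming changes
git merge upstream/master                   # merge them into your current branch
```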

A more detailed explanation of this process can be found in the github guides. http://github.com/guides/fork-a-project-and-submit-your-modifications

The next thing we’re going to map is svn:externals. In git these are called submodules. There is a great explanation of them in the github guides: http://github.com/guides/developing-with-submodules

Suppose you made some changes and they aren’t really what you want and you want to restore the repository to the last commit.

git reset --hard
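git reset --hard throws away every uncommitted change in the working tree. When only one file went bad, a gentler option is to check out just that file from the last commit (the path is a made-up example):

```shell
# Restore a single file to its last committed state,
# leaving your other uncommitted edits alone.
git checkout -- path/to/file
```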

The last topic is branching and merging changes. As an example I will take the IronRuby project, for which Michael Letterle and myself maintain the linux branch. This branch ensures that IronRuby gets the fixes it needs to compile on Mono. Below is a typical workflow for when I sync it with the source repository at git://github.com/ironruby/ironruby.git, run from the ironruby project root on my local machine.

Previously I did:

git clone git@github.com:mletterle/ironruby.git

git remote add ironruby git://github.com/ironruby/ironruby.git

And to create and track the remote linux branch I issued the following commands:

git co --track -b linux origin/linux

git pull

This has now got my local copy set up with a linux branch and has pulled in the contents of the remote branch to my local repository.

When there are changes in the source repository I issue the following commands:

git co master                         # check out the master branch

git pull ironruby master              # pull in the changes from the remote

git co linux                          # check out the linux branch

git merge master                      # merge in the changes from the master branch

mate .                                # open TextMate to resolve conflicts

git add .                             # add the files with the resolved conflicts back

git ci -a -m "Synced with upstream"   # commit the changes

git push                              # update the github server

The information above can be found in the github guides as well, spread over several pages.

Those are the commands I use about 95% of the time when I’m working with git. I thought they might be useful to other people hence the share.

If you combine the above with my previous post on how to git-enable your command-line (http://flanders.co.nz/2009/03/19/pimp-your-command-line-for-git/), I guess you’ve got a pretty sweet setup.

There is one gotcha that I’d like to repeat one more time: when you’re branching you have to close the solution in Visual Studio or all kinds of nastiness will ensue. Visual Studio will lock some files, and if git wants to remove them it can’t. This results in a branch that is probably messed up.





Pimp Your Command-line for Git

I do development on both the Mac and Windows. I prefer git as my source control these days and have done so for the past year or so. Git is great, I love it. I love the ease of branching a lot too; I’ll often branch off locally just to play around with an idea without affecting the master branch.

But having many branches can be confusing at times, especially in my case as I can only remember what I was doing for 5 seconds. So sometimes I mess up a perfectly good branch because of the confusion.

On Windows I use PowerShell as my command-line. I don’t know much PowerShell scripting; to be honest I mainly started using it so that I wouldn’t constantly get an error when I typed ls instead of dir :) Since then I have explored the environment a little, and it gives me easy access to the CLR and a way to create very powerful batch files, although I do most of my scripting in Ruby these days. It also allows you to customize your prompt.

You need to allow scripts unrestricted access for this to work. You can do that by entering Set-ExecutionPolicy Unrestricted at a PowerShell prompt. Then you close PowerShell and create a file called profile.ps1 in %MYDOCUMENTS%\WindowsPowershell.

function prompt {
    $host.ui.rawui.WindowTitle = $(get-location)
    Write-Host ("+ " + $(get-location)) -foregroundcolor Yellow
    $branches = ""
    git branch | foreach {
        if($_ -match "^\*\s(.*)"){
            $branches += $matches[1]
        }
    }
    if($branches){
        Write-Host ("(" + $branches + ") ") -nonewline -fore Cyan
    }
    Write-Host ("»") -nonewline -foregroundcolor Green
    return " "
}


The result is a prompt that shows the current path in yellow and the current git branch in cyan.

In bash I use a .bashrc script that shows me the branch in my prompt. You need ttycolors enabled to enjoy the full prompt but this is the section that takes care of my prompt.

if [ -n "$force_color_prompt" ]; then
    if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
        # We have color support; assume it's compliant with Ecma-48
        # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
        # a case would tend to support setf rather than setaf.)
        color_prompt=yes
    else
        color_prompt=
    fi
fi

parse_git_branch() {
  git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
}

# export PS1='\e[0;32m+ \u @ \w\e[m\e[0;33m »\e[m '
if [ "$color_prompt" = yes ]; then
    PS1="\[\033[01;36m\]+\u@\h\[\033[00m\]:\[\033[01;32m\]\w\[\033[00m\]\[\033[01;33m\]\n\$(parse_git_branch)»\[\033[00m\] "
else
    PS1="\u@\h:\w\$(parse_git_branch)\$ "
fi
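parse_git_branch just filters the output of git branch down to the starred (current) line; you can try the sed expression on some fake input to see what it keeps:

```shell
# Only the line starting with '*' survives, wrapped in parentheses.
printf '  master\n* linux\n  experiments\n' \
  | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
# prints: (linux)
```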


The result of the bash script is a similar prompt, showing the working directory and the current git branch.





Ninject Knows a New Trick

Earlier this week Nate already said that I was doing some work on Ninject, and now I have it working :). Everything I’m about to talk about is currently in the master tree of the Ninject github repository. Getting IronRuby to play nice with Ninject was surprisingly easy :).

There was only one place that required a somewhat weird workaround, and I’m fairly sure that workaround will go away by the time .NET 4.0 is here. The DLR duplicates a number of delegates from .NET 4.0 that .NET 3.5 also defines (i.e. System.Func<T, TT>), and then you get great exception messages like: System.Func is not of type System.Func. The solution is to not reference System.Core in your project. Except that Ninject expects the System.Core variant at some point; that was solved by aliasing the System.Core assembly and referring to the types in that assembly by their alias.

Anyway, the juicy stuff :) How can you take advantage of Ninject’s newly found friendship with IronRuby?

Ninject now has 2 flavors of Kernels. We have a StandardKernel that knows how to deal with the module configuration system that uses a fluent interface defined in C#. And now we also have a DlrKernel that extends the StandardKernel with a RubyModuleLoader plugin. If you tell the DlrKernel to look inside a path for configuration files it will scan those folders for .dll or .rb files. Those files should contain the configuration for the ninject bindings.

So to create a Kernel that is ruby enabled you would use the following code:

IKernel kernel = new DlrKernel();

var samurai = kernel.Get<IWarrior>();

The above snippet could then for example load a configuration file that has been defined like this:

require File.dirname(__FILE__) + '/../Ninject.Tests.dll'
include Ninject::Tests::Fakes

to_configure_ninject do |ninject|
  ninject.bind IWeapon, :to => Sword
  ninject.bind IWarrior, :to => Samurai
end

The configuration above shows how most of a typical configuration would be defined, with the full configuration API at your disposal. All the options can be specified in 2 ways: the first uses a hash-like syntax and the second a more fluent syntax.

to_configure_ninject do |ninject|

  ninject.bind IServiceA, :to => ServiceA, :as => :singleton,
                          :meta => { :type => "superservice" },
                          :name => "aaaaa",
                          :with => {
                            :parameter => { :my_param => lambda { |context| "param_value" } },
                            :constructor_arguments => { :const_arg => 56 },
                            :property_values => { :property_name => 94 }
                          },
                          :on_activation => lambda { |obj| obj.do_some_work },
                          :on_deativated => lambda { |obj| obj.do_some_cleanup },
                          :when => lambda { |context| "a value" }
                          # or :when => { :injected_into => ServiceB }
                          # or :when => { :target_has => AnAttribute }
                          # or :when => { :member_has => AnAttribute }
                          # or :when => { :class_has => AnAttribute }
end


to_configure_ninject do |ninject|

  ninject.bind IServiceA, :to => ServiceA, :as => :singleton do
    meta :type => "superservice"
    name "aaaaa"

    with :parameter => { :my_param => lambda { |context| "param_value" } }
    with :constructor_arguments => { :const_arg => 56 }
    with :property_values => { :property_name => 94 }

    on_activation do |obj|
      obj.do_some_work
    end

    on_deativation { |obj| obj.do_some_cleanup }

    condition do |context|
      "a value"
    end

    # or
    condition :injected_into => SomeClass

    # or ...
  end
end



One of the nicer consequences of using Ruby as a configuration language is the syntax for open generics. The example below shows how to configure types with open generics.

require File.dirname(__FILE__) + '/../Ninject.Tests.dll'
include Ninject::Tests::Fakes
include Ninject::Tests::Integration::StandardKernelTests

# IGeneric is a generic interface and GenericService is a generic type
# we don't have to specify any special notation for open generics

to_configure_ninject do |ninject|
  ninject.bind IGeneric, :to => GenericService, :as => :transient
  ninject.bind IGeneric, :to => GenericService2
end

To specify a condition the syntax would look like this:

require File.dirname(__FILE__) + '/../Ninject.Tests.dll'
include Ninject::Tests::Fakes

to_configure_ninject do |ninject|
  ninject.bind IWeapon, :to => Shuriken do
    condition do |request|
      # only bind a Shuriken when we are injecting into a Samurai
      request.target.nil? ? false : request.target.member.reflected_type == Samurai.to_clr_type
    end
  end
  ninject.bind IWeapon, :to => Sword
  ninject.bind IWarrior, :to => Samurai
end

Well that’s all. I hope you like it. I will be looking into more ways to integrate DLR stuff into Ninject; the most obvious is allowing you to inject dynamic types into static classes.



IronRuby MVC Progress

If you follow my tweets or the IronRuby mailing list then you know that I’ve been working on taking IronRuby ASP.NET MVC from the prototype stage to a more complete application. For me this has been a great experience, getting familiar with the insides of ASP.NET MVC as well as playing around with integrating IronRuby into an existing C# application.

The guys over at MSFT (John Lam, Jimmy Schementi and Phil Haack) had previously created a prototype and I built upon their work. You can read more about the previous versions of the prototype.

In a previous post I explained what I had done: how I found an entry point and how to get started building your own MVC framework on top of ASP.NET MVC.


How far am I now since my last post? Well, we’ve got action filters, result filters, exception filters and authorization filters. We have an IronRubyMvcApplication as a base HttpApplication, which should get you pretty far when building apps with IronRuby MVC.

I’m currently looking at implementing selectors and aliased actions. When I’m done with that I guess I’ve got a fairly working implementation of asp.net MVC and I’ll start developing a sample with it.

I’ve actually started building the sample to find out if I’ve missed something. The sample will be using LightSpeed, IronRubyMVC as well as youtube and flickr.

I would love to hear from people that submit bugs or even patches. I’d also like to get some discussion going on what is going to happen to it in the future :)




Attending TechDays Belgium 2009

On March 11 and 12 this year there will be TechDays Belgium.

I’m personally looking forward to the event because I will get to meet Laurent Bugnion in person.  I’ve been following him on twitter for over a year now and it would be cool to finally put a face (apart from the avatar) to the author :)

There are some really interesting sessions there, at least for me, mostly towards Silverlight and WPF development. I’m mostly doing WPF dev work at the moment; Silverlight is just something that interests me naturally. I started development by writing code in Flash and animating 3D generated models. I turned away from Flash because of the dev tools support and fully went into .NET after that. I was stoked to learn about Silverlight and I even like the tools it has for designing interfaces, but of course I’m not a designer so I minimize the time spent in there and hope for somebody to come along and make it look good. I love JavaScript but CSS not so much, and my velocity is higher in Silverlight for developing the more complex kind of UI, so I’m also very interested in the Silverlight stuff.

Maybe it’s harder for me to get excited about the new language features of C# 4.0, as for me there aren’t that many new features, although for the C# language they are huge. Variance is something I’ve been waiting for since the introduction of generics; for dynamic typing I can get my fix with IronRuby of course. I rarely do COM interop but it will be nice if it gets better support. Named parameters however are new and pretty interesting to me. While it will be a hard sell for me to introduce IronRuby into client work, I can still do lots of tricks with the dynamic typing support in C# 4, which is pretty cool too. I’ve got my ticket, have you got yours?

If you’re going too let me know so we can catch up for a beer.

See you there :)




2 weeks ago I had the chance to talk to the Italian Alt.NET community about IronRuby. I’m pretty excited about the Ruby language and I try to convey that enthusiasm onto my victims. From the talks I had afterwards it looks like I was able to infect at least one or two enough to make them go home and download IronRuby to have a play. It is the very first time that I get to see one of my presentations myself because this one got taped and put online.

If there is one thing that watching this video has taught me, it is probably that I need more practice and to prepare a lot better. Since I was a kid I’ve had the habit of walking into things hugely underprepared: I take the big bullet points of what I’m supposed to say and make a story around them when I start talking. It was my belief that that feels more natural. After having watched this presentation I may have to revisit that point of view and probably prepare better. I don’t believe that learning everything you’re going to say by heart is a good solution either, because if you then forget one thing you’re completely lost in your storyline and you may freeze.

I guess it would probably be a good idea for me to get a video camera and tape a few practice runs of presentations so that I can improve and look way more professional next time I get on a stage. That being said I really enjoy doing those things. The good thing about doing those sessions is that I get to talk to many interesting people about subjects close and dear to me.

Anyway, you’re probably not waiting for me to completely dissect my performance, so instead I’ll leave you with the link to the video.



Created a Basic Integration for IronRuby and Asp.NET MVC

As I can see the end of the chapter on Rails, I’m looking ahead to see what will be next. I decided to start working on the chapter that talks about using IronRuby with ASP.NET MVC. Jimmy Schementi and Phil Haack created a proof of concept implementation a couple of months ago that actually did work. The past weekend I’ve been looking to build on the excellent work they did and create a more complete integration. In this post I’ll try to explain what I did to make it work. The integration work is far from complete, so if you’ve got some free time on your hands and you happen to be looking for an open source project to help with, then this could be a candidate for you :).

Finding a place to start

Let me start off by saying that I’m pretty happy with the internals of the ASP.NET MVC framework. The code was easy to read given that you start in the correct file and work your way through much in the same way a request would be executed. In my case I started at the MVC handler, and immediately you see one of the classes that we’ll definitely need to customize. The RubyControllerFactory is the class in question, and it needs to be customized because we’re going to use a RubyController. ASP.NET MVC internally uses reflection to do its magic. In the futures project they have a couple of other implementations, like async with reflection and so on. I decided to use the classes prefixed with Reflected as my guide for creating my own integration; they were probably the simplest implementation of the class. I kept the view engine Jimmy and Phil created and focused on the controllers. Working with the DLR APIs requires a bunch of classes and

Sweet now what does this mean in terms of IronRuby integration?

To limit some of the work I needed to do, I decided that Ruby controller actions don’t take any parameters we can bind to; instead you will have to rely on the data that’s available in the params hash to get to the input delivered by the request. Actually, that is a decision I kept from the earlier POC implementation.
In ASP.NET MVC there is a ReflectedControllerDescriptor and a ReflectedActionDescriptor. They are used to cache the information we need so that the costly operations are only performed once, which is a good strategy IMHO. For IronRuby that means we’ll need to create a RubyControllerDescriptor and a RubyActionDescriptor. The last class we’re going to need to customize in this exercise is the ControllerActionInvoker, which does what the name hints at: it invokes actions on your controller :)

For people that have been doing Rails applications: you’re not limited to Rails now. You could use Ruby but leverage the ASP.NET MVC infrastructure for implementing an MVC web app. Once somebody creates the adapters for ActiveRecord to leverage ADO.NET to talk to data sources, you should be able to just use the ActiveRecord that comes with the Rails framework for your models. The view engine in ironrubymvc is also ERB based, so I’d imagine you would be able to just copy your view code in, making sure that you have replacement helpers if you’ve used helpers. Working on this code also opens up the question of whether it isn’t possible to actually run Rails via a similar mechanism… mmm, must investigate.

Where can I find it?

I forked the git repository from Jimmy Schementi, and I send him pull requests when I’ve pushed some changes. So you could potentially pick that repository to work out of. The disadvantage is that you won’t pick up changes I make immediately. The good thing is that Jimmy’s repo is probably a good place to follow because he can also take work that Phil did and add it to the repo; I will then have to sync my version with his. Or you could use my repo and pick up the changes I make immediately, but you’ll have to wait for them to be merged with the changes that have been applied to Jimmy’s repo until I get around to merging them into mine. I’d say that over time it’s probably a better idea to use Jimmy’s repo: mine will be very active, but just for a short period of time; when I’m happy with it I’ll move on.

What’s left to do?

  • Implement action filters (before/after)

  • Implement authorization filters

  • Implement an HttpApplicationBase class that will create the script runtime

  • Implement a HttpModule that will take care of creating a RubyEngine object

For today I’d say go_to(“http://github.com/casualjim/ironrubymvc”).play.create.have_fun

When my work stabilizes a little bit more I’ll write a blog post explaining how I went about using the DLR hosting APIs to host IronRuby in an ASP.NET application and how the implementation of IronRubyMvc was put together.




Participating in the Italian Alt.NET User Group

I just finished my talk at the Italian Alt.NET conference. There were the following topics of discussion:

  • Domain Driven Design

  • User stories & planning game

  • Advanced Unit Testing in the real world

  • Acceptance testing (FitNesse)

And of course my topic was IronRuby

Because of my level of Italian, or better yet the lack thereof, I couldn’t participate in many of the discussions. IMHO that was a pity because I actually do like having discussions about programming and designing applications.

My talk went alright judging by the reactions of the people that listened to it. Simone filmed the whole day and told me he would put the videos up on Vimeo for all to see. What I thought was particularly good for IronRuby is that there is definitely interest in using it. The most obvious places for people to start using Ruby are RSpec (when it works OTB) and rake as a replacement for NAnt or MSBuild scripts. Of course, if you ask me there are plenty of other reasons to use IronRuby, like Silverlight and WPF.

For the people that are interested in my presentation: you can download it from Google Code. Most of the code that I showed is included in the presentation as notes. All in all I had a great time and I hope the Italian community will invite me again some time :). Italy has the benefit of great food and wine, and that makes it very easy to convince me to take the plane :)

I couldn’t show everything because I got carried away at one point and lost track of time. So I had to drop my demos about using Bacon (as a replacement for RSpec) to start writing specs for your .NET code today. I also wanted to show some of the stuff Jimmy Schementi did with Silverlight and IronRuby, like agdlr and the integration for Ruby on Rails he created with the silverline plugin, but unfortunately time ran out before I could show off some of those things.

I had the opportunity to talk to the Italian member of the Mono team, Massimiliano Mantione. And this is what I love about conferences: they are full of interesting people with all kinds of ideas. When I go to an event like TechEd I generally don’t actually attend many sessions, because most of that content is available online anyway. Instead I roam the hallways in search of good conversations and interesting people. Meeting a member of the Mono team made me slightly envious because he’s doing what I would love to be doing too :) He’s working from home and getting paid to work on FOSS. He mentioned some interesting stuff the Mono guys are doing and explained at a high level how they got the C# eval to work.

Some other notable facts about how they ran the conference, which I liked a lot: the conference is free to attend, but they did accept donations. It was those donations that paid for my flight over here :). They organized the conference in an Open Spaces format, which is very open to discussion, and they were completely transparent about how the money had been spent. They still had some money left and donated it to an open source project, chosen by vote.

The open source projects on the list of possibilities were:

In the end it was Rhino.Mocks that won the vote and they have received the donation.

I would like to thank the organizers of the conference for having me and the people that followed my talk for not falling asleep.



A New Year… Some Changes

With 2009 started, it might be time to look ahead at what’s to come this year.

I hope your holidays were better than mine with my grandfather dying on Christmas eve, I wasn’t in much of a celebratory mood this year.

After having tried being a consultant for a while I have a serious hangover from the enterprise style of development, at least the dev style that only listens to what Microsoft has to say and swears by their judgment under the motto: “You don’t get fired for buying Microsoft”. As if that wasn’t bad enough, all the CRUD went through stored procs instead of LINQ to SQL. When somebody there told me to copy/paste instead of taking a little bit more care, I made up my mind and left the place. This leaves me at the start of this year without a project/job, and as it looks now that might not be the best position to be in, with the crisis and all.

Another area where I desperately need to make some progress is the IronRuby in Action book. So far I have 4 chapters completed and the one on Rails is about half-way there. I’m not making as much progress as I initially thought, partly because I decided to turn my life upside down this year. Now that I’ve finally found a good place to live and my personal life isn’t as messy as it used to be, I’ve returned to writing.

More news on the IronRuby in Action front is that I’ve got a co-author now. His name is Michael Letterle and he has contributed to the IronRuby project.  Michael is very passionate about Ruby development and is currently working on the Silverlight chapter of the IronRuby in Action book.

As part of the chapter on Rails I’ve built a Twitter clone. In the WPF chapter I created a Twitter client, and to ensure things continue to work both offline and online it seemed like a good idea to create the server side too. The last couple of days I’ve been implementing this limited version and you can find it at http://codeplex.com/mocktwitter. Finishing this application is on my to-do list for this year; for now it does a little bit more than it needs to for the samples from the WPF chapter to work.

More on the IronRuby subject: I’ve also created a DBI layer for ADO.NET that you can use in conjunction with IronRuby to talk to ADO.NET data sources. I don’t know yet if I will base my ActiveRecord adapters on this DBI layer or work with the providers directly. I put up a post on how to get started and where to get the sources on rubydoes.net.

I intend to spend some time on agdlr as well as on ironnails, because ironnails has been a lot of fun to develop.



Building IronRuby With Mono on OSX

This is a duplicate post of http://rubydoes.net/2008/12/30/building-ironruby-with-mono/

IronRuby comes in 2 flavours of SCM and apparently also with 2 flavours of project layout.

I spend most of my time on the Mac and I wanted to be able to test IronRuby stuff on it too. Building IronRuby on my Mac doesn’t work straight away; you have to patch it a little to make it work.

If you’re using the svn version then you can use the patch created by Seo Sanghyeon.

Michael Letterle has forked the ironruby git repository and created a branch called linux. You can find it on github.

For me the linux branch didn’t want to work, so I forked the ironruby repository too and created a branch called mono. My version is also up on github.

If you’re going to use the git layout of IronRuby, which is practically the same as what the IronRuby team uses at Microsoft, you’ll have to set an environment variable MERLIN_ROOT. I use a .bashrc file and added the following line to it:

export MERLIN_ROOT='/Users/ivan/src/ironruby/merlin/main'

Now how do you get to those git branches?

Start a terminal session.

I have a directory src in which I download and compile sources, so I navigated into that directory.

cd src
git clone git://github.com/casualjim/ironruby.git
cd ironruby
git checkout -b mono
git pull origin mono

Compiling IronRuby

cd merlin/main/Languages/Ruby
rake compile mono=1


You will also need a version of Ruby installed, plus rake and the pathname2 gem.

It won’t work with the latest release of Mono; I’m using the trunk version of Mono to build IronRuby. I’ve got instructions that show you how to compile Mono here.

To top