Dealing with conflict in SVN

Okay, you have finished working on a cool new feature in your app. It works perfectly on your machine, and you can’t wait to let the world know about it. But first, you should commit your work to the central SVN server so the other team members can see it.



In SVN, you perform a commit operation to send your local changes to the central repository. My general rule for committing is to do it as often as I can. Of course, to avoid upsetting my teammates, I must also ensure that the code I commit compiles.

Why commit frequently, you ask? The rule of thumb is: whoever commits first, wins. If someone happens to commit before you do, instead of the usual success message, you may end up with something like this :




Ok, because we ‘lost’ the commit ‘race’, we have to do an update. Just right-click, Update. How bad could that be?

You do an update to get the latest version from the central repository and merge it with your local changes. The result can be one of the following :

- no local changes, no remote changes : nothing happens; you are already up to date
- no local changes, remote changes exist : your file is updated
- local changes exist, no remote changes : your file stays marked as modified
- local changes exist, remote changes exist : merged / conflicted

If you have modified a file, and the server also has changes to the same file, you may experience something called *drum roll please* a conflict.


A conflict is the worst thing that can happen after an update. It means both your local copy and the server have changes, and SVN doesn’t know how to combine them (if it did, they would have been merged automatically).

A conflict basically means SVN is asking for your help to merge the changes. To assist you in resolving the conflict, SVN has kindly inserted conflict markers into the conflicted files. Well, the intention is good, but as the saying goes, the road to hell is paved with good intentions.
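For reference, a conflicted file looks something like this (a sketch; the revision number after `.r` will differ in your case):

```
a line both versions agree on
<<<<<<< .mine
the line as you changed it locally
=======
the line as it was changed on the server
>>>>>>> .r42
another line both versions agree on
```

SVN also drops a few helper copies next to the conflicted file (file.mine, file.rOLDREV, and file.rNEWREV); they disappear once you mark the file as resolved.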

In my early days working with SVN, I noticed an option labeled resolved. I happily clicked it, and the conflict was suddenly gone. The file was now marked as modified, and ready to be included in a commit.

That’s it? Well, if you really look at the content of the file, you’ll realize that it is quite a mess. Your local changes are still there, so are the remote changes from the server, and on top of that, the conflict markers inserted by SVN are there too. That is definitely not what you want.


The correct way to handle a conflict is through the Edit conflicts menu. You’ll see three panes: one for the remote changes (theirs), one for your local changes (mine), and a large one for the merged version.


Also, look for a red mark on the indicator at the left. That’s where the conflict is.

You have the option to discard one version and keep the other, or to keep them both in a particular order. Basically, you are in control now.


Once you are happy with the merged version, you can safely mark the file as resolved. Now you can confidently commit your changes.
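For completeness, the same update/resolve/commit workflow on the SVN command line looks roughly like this (the file name and commit message are made up):

```
# fetch the latest changes; conflicted files are listed with a C status
svn update

# merge the file by hand (or with your favorite merge tool),
# then tell SVN to keep your merged working copy
svn resolve --accept working Feature.cs

# the commit should now succeed
svn commit -m "Add cool new feature"
```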

PS : For a more comprehensive guide on SVN conflict, you can go here.


Working with Oracle in .NET – part IV : CRUD

So far, this is what we have done :

  1. Create a schema in SQL Server identical to our Oracle schema, using SSMA.
  2. Generate the DAL using a T4 template and the identical schema in SQL Server.
  3. Provide a parameterless constructor to simplify the usage of the DAL.

Now it is time to actually use the DAL to work with our Oracle database. For the examples in this article to work, you will need to include the BLToolkit.Data.Linq namespace in your using clause.

using BLToolkit.Data.Linq;


Insert

Use the following method if your table has an auto-increment column as its primary key, and you want to retrieve the newly inserted identity :

MyTable newData = /* get your new data somewhere */;
var newId = Convert.ToDecimal(db.InsertWithIdentity<MyTable>(newData));

Change the conversion part to match the type of your identity column.

Please note though, the above method generates an SQL statement that sets the value of the primary key column too (while we actually want the RDBMS to generate the value for us). Depending on your database design, this may lead to undesirable results.

In my case, the above method works fine because in my database the primary key is always auto-generated by Oracle, which effectively discards the value provided by the user (it is explained here). If you don’t have an equivalent setup in your database, you had better use the following alternative :

var newId = Convert.ToDecimal(db.MyTable.InsertWithIdentity(() => new MyTable { 
    /* populate all the columns here, excluding the identity column */
    Description = "new item",
    Price = 2000
}));

The disadvantage of the above approach is that you must populate the new object once again inside the object initializer, skipping the identity column. I’ve seen someone complain about this, and apparently it can be avoided by decorating the identity column with the NonUpdatable attribute. However, I haven’t tried that approach yet.
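If you want to try the attribute route, the decoration would look something like this on the entity class (a sketch I haven’t verified myself, so treat the attribute combination as an assumption; the column names are made up):

```csharp
using BLToolkit.DataAccess;

public class MyTable
{
    // NonUpdatable should exclude this column from the generated INSERT,
    // letting Oracle supply the value through its trigger/sequence.
    [PrimaryKey, NonUpdatable]
    public decimal Id { get; set; }

    public string  Description { get; set; }
    public decimal Price       { get; set; }
}
```

With that in place, the first InsertWithIdentity overload should become safe to use, since the identity column would no longer be part of the generated statement.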

Bulk Insert

If you need to insert multiple items at once, use the InsertBatch method :

List<MyTable> newItems = /* get your new items somewhere */;
db.InsertBatch(newItems);

Update and Delete

Previously, I had been using Linq to Sql (L2S) as my ORM. In L2S, as far as I know, you need to select the data before you can update or delete it. A similar pattern is possible using BLToolkit :

/* single item */
MyTable data = /* get your data somewhere */;
db.Update(data);
db.Delete(data);

/* multiple items */
List<MyTable> list = /* get your data somewhere */;
list.ForEach(item => db.Update(item));

However, in some cases this pattern is far from efficient (you can read more about it here or here). Fortunately, BLToolkit provides update / delete operations using a predicate, effectively giving us efficient bulk operations :

/* update the status of all expired items */
db.MyTable.Update(p => p.ExpiredDate.Date < DateTime.Now.Date, p => new MyTable { Status = "Expired" });

/* delete all expired items */
db.MyTable.Delete(p => p.Status == "Expired");


IMHO, BLToolkit has excellent LINQ support. Standard everyday operations like filtering, sorting, and paging are quite straightforward :

/* fluent style */
var data1 = db.MyTable.Where(p => p.Description.Contains("keyword")).OrderBy(p => p.Description).Skip(10).Take(10);

/* query style */
var data2 = (from p in db.MyTable where p.Description.Contains("keyword") orderby p.Description select p).Skip(10).Take(10);

BLToolkit also has an association feature that provides a neat alternative to LINQ’s join. You can read more about it here. Of course, if you really need to, you can also do the join operation using LINQ. For more information on other possibilities, you can read here or here.
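For the record, a plain LINQ join over two generated tables looks the same in BLToolkit as in any LINQ provider; the table and column names below are hypothetical:

```csharp
var query = from o in db.PurchaseOrder
            join c in db.Customer on o.CustomerId equals c.Id
            orderby o.OrderDate descending
            select new { c.Name, o.OrderDate, o.Total };
```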

You can also use other LINQ libraries, like LinqKit and its PredicateBuilder for more filtering options, or Dynamite for dynamic sorting capabilities.

Custom Query

Apart from generating the SQL for us, BLToolkit also allows us to execute a custom query. I personally used this feature when I was working with Sphinx.

using (var db = new SphinxDbManager())
{
    var query = string.Format(
        "select catalogid from catalog where match('@value \"{0}\" @tag 245 @subruas a') group by catalogid limit 0, 1000000 option max_matches = 1000000",
        value); // "value" is the search term passed into the method
    return db.SetCommand(query).ExecuteList<CatalogDocument>().Select(q => q.CatalogID).ToList();
}

Stored Procedure

Frankly speaking, I don’t use the keep-it-in-stored-procedures development style. But from what this article says, the support seems good enough. If you need stored procedures, maybe you can use that article as a starting point, besides the BLToolkit documentation itself.

Debugging the SQL

Ever wondered what kind of SQL is produced by BLToolkit? The DbManager class has a property called LastQuery, which contains the last query sent to the database server. As an Oracle newbie, I learned a lot of SQL from this property 🙂

The SQL in DbManager’s LastQuery is still in parameterized form. If you want to know the values of the parameters, you should store the IQueryable in a variable. The IQueryable has a SqlText property, where you can see the values assigned to the parameters (in commented form), besides the query itself. This way, you get a deeper insight into what happens under the hood.
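Putting this together, a quick-and-dirty inspection might look like this (a sketch; the table and filter are made up, and SqlText is easiest to read by inspecting the query variable in the debugger):

```csharp
using (var db = new MyDbManager())
{
    // keep the query in a variable so you can inspect its SqlText
    // property in the debugger (parameter values shown as comments)
    var query = db.MyTable.Where(p => p.Price > 1000);

    // force execution so LastQuery gets populated
    var items = query.ToList();

    // the parameterized SQL that was actually sent to Oracle
    Console.WriteLine(db.LastQuery);
}
```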


Working with Oracle in .NET – part III : DbManager

After successfully compiling our generated DAL, it is now time to use it to actually access our Oracle database from our .NET applications.

Add reference to BLToolkit Oracle data provider


Modify your configuration file

This is what mine looks like :

<configuration>
  <configSections>
    <section name="bltoolkit" type="BLToolkit.Configuration.BLToolkitSection, BLToolkit.4" />
  </configSections>
  <bltoolkit>
    <dataProviders>
      <add type="BLToolkit.Data.DataProvider.OdpDataProvider, BLToolkit.Data.DataProvider.Oracle.4" />
    </dataProviders>
  </bltoolkit>
  <connectionStrings>
    <add name="MyConnectionString" connectionString="Data Source=localhost/xe;User ID=user;Password=password;" providerName="Oracle.DataAccess.Client" />
  </connectionStrings>
  <!-- other entries -->
</configuration>

Tips : you can easily accomplish both steps by simply installing the library from NuGet.


Usage example

A typical usage of the DAL would be something like this :

using (var db = new MyDbManager("MyConnectionString"))
{
    /* do something with the database */
}

Simplifying the usage

Do you notice the parameter supplied to the DbManager’s constructor? It is the name of the connection string we added to the web.config in the earlier step. It has to be supplied every time we instantiate the DbManager, which in my case is very often. To me, a magic string like this is not a good thing.

To avoid supplying an arbitrary string every time we need to access the database, we can add a parameterless constructor to the DbManager. But since this class is auto-generated, modifying it directly is not a good option. Instead, as the DbManager is declared as partial, we can add another partial class to achieve this. This is what my class looks like :

public partial class MyDbManager : DbManager
{
    public MyDbManager()
        : base("MyConnectionString")
    {
    }
}

Ensure you declare this class in the same namespace as the generated DAL.

With this addition, we can now create the DAL without supplying any parameter.

using (var db = new MyDbManager())
{
    /* do something with the database */
}

Another advantage of this approach is that if the connection string name needs to change in the future, we only have to modify a single string in our code. Handy, isn’t it?

In the next article, I will cover basic CRUD operations against our Oracle database.

Working with Oracle in .NET – part II : T4

In the previous article, we successfully generated an SQL Server database with properties identical to our Oracle schema. With this SQL Server database at our disposal, generating the data access layer (DAL) is a trivial task. BLToolkit provides a set of T4 templates to achieve this. Basically, here I’m only rephrasing the steps documented here by the BLToolkit team themselves, adding some adjustments as necessary to fit my needs.

Generating the DAL

  1. Download BLToolkit source code, and unzip it somewhere.
  2. Start Visual Studio, create a new class library project.


  3. Add everything under the Source\Templates folder of BLToolkit source code into your project.


  4. Add a new T4 file (*.tt) into your project.


  5. Edit your new T4 file into the following :
    <#@ template language="C#" hostspecific="True"        #>
    <#@ output extension=".generated.cs"                  #>
    <#@ include file="BLToolkit.ttinclude"                #>
    <#@ include file="BLT4Toolkit.ttinclude"              #>
    <#@ include file="MSSQL.ttinclude"                    #>
    <#@ include file="Renamer.ttinclude"                  #>
    <#@ include file="ValidationAttributes.ttinclude"     #>
    <#@ include file="WCFAttributes.ttinclude"            #>
    <#
        ConnectionString = "Data Source=.;Initial Catalog=your_database_name;Persist Security Info=True;User ID=sa;Password=your_password";

        // we don't normally select ROWID from our Oracle tables
        foreach (var t in Tables.Values)
        {
            if (t.Columns.ContainsKey("ROWID"))
                t.Columns.Remove("ROWID");
        }

        GenerateModel();
    #>

    Needless to say, you should modify the connection string to match your SQL Server database settings.

    Here is a list of what the include files are good for :

    1. BLToolkit.ttinclude : Required.

    2. BLT4Toolkit.ttinclude : Optional. You need T4 Toolbox to use this file. It is useful for automatically setting the namespace and the class name of the generated class to your project name and your T4 file name, respectively. If you omit it, you should specify them manually using the Namespace and DataContextName variables.

    3. MSSQL.ttinclude : Required. Used to connect to the SQL Server database.

    4. Renamer.ttinclude : Optional. I use this to restyle the Oracle naming convention (all capitals, with underscores as word separators) into the .NET naming style (PascalCase).

       A little note : I had to modify this file a bit so it produces the result I desired. If you are wondering what changes I made : on the 10th line of the file, I changed name to name.ToLower(), and I also added a little code to rename the foreign keys too. You can find the complete code here.

    5. ValidationAttributes.ttinclude : Optional. Use this if you want to decorate your class with data validation attributes.

    6. WCFAttributes.ttinclude : Optional. Use this if you want your class to be ready to be exchanged using WCF. It adds the DataContract and DataMember attributes to your class. You will need a reference to the System.Runtime.Serialization assembly in your project in order to successfully compile the generated class.
  6. By now, your T4 file should be ready for action. Open the Properties window for the file, and set the Custom Tool to TextTemplatingFileGenerator.


    You should see this dialog box :


    Click OK, and your class should be generated.

  7. Done, at least for the generation part :). You should see the generated class right under the *.tt file.


    The next time you need to re-generate the class, you could right-click on the *.tt file, and select Run Custom Tool.


  8. To make our newly generated class compile successfully, we need to add a reference to the BLToolkit assembly in our project. Personally, I do this using NuGet to ensure I get the latest version.

    If you opted to include WCFAttributes.ttinclude in your T4 file, you will also need a reference to System.Runtime.Serialization assembly.


  9. Now we are really done. To verify this, try compiling your project.

Solving the issues

Some problems might occur, and your class might still not compile. Inspect your compiler error messages to learn what the problems are, and try to solve them accordingly.

You have two options for fixing an uncompilable class. You can edit the class manually; this is simpler, but you have to repeat it every time the class is re-generated. The more sophisticated option is to update the T4 template to fix the problem. This way, you get a more permanent fix.

Once everything is set up this way, our workflow is much simpler. If you ever need to update your Oracle database definition in the future, synchronize the SQL Server database using SSMA for Oracle, and simply run the custom tool on your T4 file to update the DAL.

In the next article, I will cover how to utilize our generated data model class in a .NET application.

Working with Oracle in .NET – part I : SSMA

There are many ways to connect to an Oracle database from a .NET application. You can use ADO.NET directly, or use an ORM solution such as NHibernate or the like. In my workplace, we previously used NHibernate, but for some reason we stopped using it and reverted to the old ADO.NET way.

However, after some experiments, I found my own way to work with Oracle in a .NET environment, using a library called BLToolkit along with several other tools. After reading their documentation, I was convinced enough to use it, and so far I’m very satisfied with the result.

Our development approach is database-first, and this article will only cover that. So if you are using a code-first approach, this article will not be relevant to you.

The first thing we need is a data model class that maps to our Oracle database. Since we have already spent our precious time designing the database, we want the class to be auto-generated from it. BLToolkit provides this kind of functionality using a T4 template. Unfortunately, it doesn’t support generating the data model class directly from an Oracle schema. Since I’m not capable enough to write a T4 template to achieve this myself, I need a workaround.

My workaround is to use SQL Server and a utility called SSMA for Oracle. The idea is to import our Oracle database into an SQL Server database, and generate the data model from there. For this purpose, SQL Server Express Edition is sufficient, so you don’t have to worry about licensing issues.

Importing the database

  1. Open SSMA for Oracle and create a new project.
  2. Click Connect to Oracle, and provide the information to connect to the Oracle server.


    Depending on which user you use to connect to Oracle, you might be shown this warning. In my case, I can safely ignore it and click Continue.


  3. Click Connect to SQL Server, and provide the information to connect to the SQL Server.


    If you are connecting to SQL Server Express, like me, you will get this warning. Just ignore it and click Continue.


  4. Optionally, you can change the type mapping between Oracle and SQL Server. I usually change the mapping of number to decimal instead of the default float, because I usually use number as a primary key in Oracle. If it were mapped as float in SQL Server, it would lose its primary key status, because SSMA doesn’t seem to allow float to be used as a primary key.


  5. If you made changes to the Oracle schema after connecting to Oracle, you can refresh them by right-clicking the schema name and selecting Refresh from Database.


  6. Tick the checkbox next to the schema(s) you want to import, and click Convert Schema.


    Optionally, you could also change the target schema in SQL Server.

    While the process is running, you can inspect it through the Output window.

  7. On the SQL Server Metadata Explorer, you’ll see the target schema produced by the conversion process. Right-click on it, and select Sync with Database. This process will actually update our SQL Server database with the new definitions.


  8. Done. Now you should be able to see your shiny new database (or an up-to-date version of the existing one) in SQL Server.

Handling the problems

You may encounter some issues during the conversion process (you can see them in the Error List window). Once, I got an error setting a float column as a primary key. Changing the type mapping of number from float to decimal (as described in step 4 above) solved it.

However, some issues might not be so important to us. Keep in mind that the main reason we are doing this process is merely to enable us to auto-generate our data model class. So as long as the SQL Server tables have the same column names, column types, primary keys, and foreign key specifications as their counterparts in Oracle, it should be good enough for us.

In the next article, I will cover how to generate the data model class using the T4 template provided by BLToolkit.

Oracle Auto-Increment Made Easy

One of the tedious tasks I have to deal with when working with Oracle is creating an auto-increment column. At first I expected this kind of task to be easily accomplished, just like in MySQL or SQL Server, but apparently things are a little different in Oracle.

For those who don’t know it yet, an auto-increment column in Oracle is implemented using a trigger and a sequence. Compared to IDENTITY in SQL Server or AUTO_INCREMENT in MySQL, I find this a little bit too demanding 🙂
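To give you an idea of the manual work involved, this is roughly what you would have to write by hand for every table (a sketch; the table, column, and object names are made up):

```sql
-- a sequence to generate the values
CREATE SEQUENCE my_table_seq;

-- a trigger that assigns the next value on every insert
CREATE OR REPLACE TRIGGER my_table_bi
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
  SELECT my_table_seq.NEXTVAL INTO :NEW.id FROM dual;
END;
```

This is exactly the pair of objects that Oracle SQL Developer will generate for us in the steps below.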

Fortunately, there is a way to automate this task using a front-end application for Oracle called Oracle SQL Developer. I’m using it on my Windows machine, but since it is Java-based, it should also be available on Linux or Mac. Here’s how I do it :

  1. Create a new table.


  2. Give the table a name, and make sure the Advanced check box is checked.


  3. Give the identity column a reasonable name. You should also specify an appropriate type for this column.


  4. You might also want to make the identity column the primary key. Go to the Primary Key menu, and ensure the column is placed in the right-most list box.


  5. Now for the auto-increment itself. Go to the column sequences menu, pick the identity column, and check the box that says populate column bla bla bla.


    To ensure that the value of this column is always auto-generated by the system, uncheck the checkbox at the bottom.

    Oh, one more thing. This menu is only available for new tables. That’s why you should follow these instructions from the first step 🙂

  6. Go back to the Columns menu, and complete the column specifications. When you are happy with your new table, click OK to save it. Along with the table, you will also get a trigger and a sequence for the auto-increment mechanism.

Now you can have an auto-increment column without having to manually create the trigger and sequence. I hope you will find this useful.