Friday, December 11, 2009

Unit Testing With Compact Framework and Visual Studio

Following up on my issue with running NUnit tests for the Windows Mobile application, I came across a couple of articles on using the unit testing framework integrated into Visual Studio 2008, which is now supposed to be reasonably user friendly.
The process starts with selecting the function name, right-clicking on it and selecting "Create Unit Tests".

I can select the functions I want unit tests to be created for - I'll only choose one for now

I am then prompted for the name of the project where my tests will be created. Visual Studio adds a new project to the solution and this is the code for the test method created for the function I chose.


/// <summary>
/// A test for CreateDatabase
/// </summary>
[TestMethod()]
public void CreateDatabaseTest()
{
    DataBase target = new DataBase(); // TODO: Initialize to an appropriate value
    target.CreateDatabase();
    Assert.Inconclusive("A method that does not return a value cannot be verified.");
}

This is great, except that I want to test for the things I actually care about. So, of course, I need to change that. This is probably closer to what I want to test in my method:


[TestMethod()]
public void CheckDatabaseCreation()
{
    DataBase target = new DataBase();
    target.SetFileName(@"\Program Files\TestDB\TTrack.sdf");
    target.SetConnectionString(@"Data Source=\Program Files\TestDB\TTrack.sdf");
    target.DeleteDatabase();
    target.CreateDatabase();
    target.RunNonQuery(target.qryInsertRecord);
    int count = target.RunScalar(target.qryCountUsers);
    Assert.AreEqual(1, count);
}

This is not much different from the way tests are created in NUnit; in fact, so far there is no difference at all. Now, to run the test: there is a "Test" item in the top menu, where I can select Test->Windows->Test View, and the "Test View" window becomes visible.

There I can see my tests - the auto-generated one and the one I added myself.


I can run all tests, or select any combination of tests from the Test View and choose either "Run Selection" or "Debug Selection" (I have not yet found out what the difference is - if I place a breakpoint inside the test method and choose "Debug Selection", execution does not break at the breakpoint). After the test(s) finish running, I can see the results in the Test Results window.


Tuesday, November 24, 2009

Compact Framework and NUnit

I have an idea for a small application I could write for Windows Mobile. The application will only use a local SQL Server CE database, at least initially, and it is really simple to create a local database on the device. The only things I need are the physical location of the sdf file on the device and the connection string.

In my "DataBase" class I generate them

private string GetLocalDatabasePath()
{
    string applicationPath = Path.GetDirectoryName(this.GetType().Assembly.GetName().CodeBase);
    string localDatabasePath = applicationPath + Path.DirectorySeparatorChar + "TTrack.sdf";
    return localDatabasePath;
}

private string GetLocalConnectionString()
{
    string localConnectionString = "Data Source=" + GetLocalDatabasePath();

    return localConnectionString;
}

To create the database I just check whether the database file already exists and, if not, I create it. There is also a delete database function, used for testing purposes.

internal void CreateDatabase()
{
    if (!File.Exists(localDatabasePath))
    {
        using (SqlCeEngine engine = new SqlCeEngine(localConnectionString))
        {
            engine.CreateDatabase();
        }

        RunNonQuery(qryCreateTables);
    }
}

internal void DeleteDatabase()
{
    string dbPath = GetLocalDatabasePath();
    if (File.Exists(dbPath))
    {
        File.Delete(dbPath);
    }
}

The RunNonQuery bit is just the creation of tables in the database.

internal void RunNonQuery(string query)
{
    string connString = GetLocalConnectionString();

    using (SqlCeConnection cn = new SqlCeConnection(connString))
    {
        cn.Open();
        SqlCeCommand cmd = cn.CreateCommand();
        cmd.CommandText = query;
        cmd.ExecuteNonQuery();
    }
}

The query for now just creates the simplest possible "dummy" table

internal string qryCreateTables = "CREATE TABLE Users (" +
    "UserID uniqueidentifier PRIMARY KEY DEFAULT NEWID() NOT NULL, " +
    "Name NVARCHAR(50) NOT NULL )";

RunScalar, obviously, is used to run ExecuteScalar(). Some refactoring is still required to improve the code.

internal int RunScalar(string query)
{
    string connString = GetLocalConnectionString();

    using (SqlCeConnection cn = new SqlCeConnection(connString))
    {
        cn.Open();
        SqlCeCommand cmd = cn.CreateCommand();
        cmd.CommandText = query;
        return int.Parse(cmd.ExecuteScalar().ToString());
    }
}
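
I have not shown qryInsertRecord and qryCountUsers here; they are along these lines, consistent with the Users table created above:

// along these lines - the exact queries may differ
internal string qryInsertRecord = "INSERT INTO Users (Name) VALUES ('Test User')";
internal string qryCountUsers = "SELECT COUNT(*) FROM Users";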

Now that I can create a database, I can run this simple bit of code to see if it is working.

DataBase db = new DataBase();
db.CreateDatabase();
db.RunNonQuery(db.qryInsertRecord);
MessageBox.Show(db.RunScalar(db.qryCountUsers).ToString());

The database gets created, a record gets inserted, a message box with "1" is shown. All is well.

Next, I decide to quickly create and run a simple test for database creation: if the database is present, I delete it, then create a new one, insert one record and check that the count of records is indeed one.

Here is the test I write in NUnit.

[Test]
public void CheckDatabaseCreation()
{
    DataBase db = new DataBase();
    db.SetFileName(@"\Program Files\TestDB\TTrack.sdf");
    db.SetConnectionString(@"Data Source=\Program Files\TestDB\TTrack.sdf");
    db.DeleteDatabase();
    db.CreateDatabase();
    db.RunNonQuery(db.qryInsertRecord);
    int count = db.RunScalar(db.qryCountUsers);
    Assert.AreEqual(1, count);
}

This does not go as well, however:

What happened there? Oh, of course - the NUnit test runs on the desktop, but the code is supposed to run on the emulator (I don't use the actual device yet). So, it looks like I will have to work on the approach to testing ...


Thursday, November 19, 2009

Compact Framework and forms management

This was my first experience with the Microsoft Compact Framework. The application is developed in Visual Studio 2005 and is targeted to run on Pocket PC 2003 devices. Basically, I had to extend a simple application that had only one form, adding some functionality and a few more forms. I understand that forms in the Compact Framework are treated a bit differently compared to a desktop application. What to use for navigation between forms, Show() or ShowDialog()? I decided to use Show(), because I have only about 5 forms, most of them very simple, and also because my application will be the only one running on the device. So I thought: if I create each form once and keep them all in memory, just showing and hiding them, it may use more memory (which I do not care that much about) but be easier on the device battery. Okay, I may be saying total nonsense here - I have about 7 days of Compact Framework development experience at this very moment.

So I have a dictionary where all existing forms are kept.

private static Dictionary<string, Form> _applicationForms = new Dictionary<string, Form>();

And the function that gets the form from the dictionary by name.

internal static Form GetFormByName(string formName)
{
    if (_applicationForms.ContainsKey(formName))
    {
        return _applicationForms[formName];
    }
    else
    {
        Form newForm = CreateFormByName(formName);
        AddFormIfNotExists(newForm);
        return newForm;
    }
}

And the function to create a form if it has not been yet created.

private static Form CreateFormByName(string name)
{
    Form form = new Form();

    switch (name)
    {
        case Constants.frmFirst:
            form = new frmFirst();
            break;

        ...

        case Constants.frmLast:
            form = new frmLast();
            break;
        default:
            form = new frmLast();
            break;
    }
    return form;
}

And the function to add the form to the dictionary if it is not there.

internal static void AddFormIfNotExists(Form frm)
{
    if (!_applicationForms.ContainsKey(frm.Name))
    {
        _applicationForms.Add(frm.Name, frm);
    }
}

And when I need to show another form, I get it from the dictionary and show it, then hide the current form.

internal static void ShowFromForm(Form source, string targetName)
{
    Form frm = GetFormByName(targetName);
    frm.Show();
    source.Hide();
}

There's a bit more to it - sometimes I need to find which form is currently visible, etc. - but these are the core things. Stupid? Good enough? I don't know ...
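
For the "which form is currently visible" part, a helper along these lines does the job (assuming only one form is visible at a time):

internal static Form GetVisibleForm()
{
    // relies on the show/hide scheme above keeping exactly one form visible
    foreach (Form frm in _applicationForms.Values)
    {
        if (frm.Visible)
        {
            return frm;
        }
    }
    return null;
}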


Saturday, October 3, 2009

Doing Some Stuff on Another Computer

It is fairly easy to restart a service running on a remote computer. You only need to know two things: the name of the remote computer and the name of the service itself. No surprises.

private void RestartService(string MachineName, string ServiceName)
{
    using (ServiceController controller = new ServiceController())
    {
        controller.MachineName = MachineName;
        controller.ServiceName = ServiceName;
        controller.Stop();
        controller.Start();
    }
}

Almost as easy is monitoring the printers on a remote computer using WMI. This time, only the remote computer name is required. Below is a small function that returns a list of CustomPrinterObject items; the CustomPrinterObject class can be defined like this, for example:

public class CustomPrinterObject
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }

    // many other properties
    ....

    private string _status;

    public string Status
    {
        get { return _status; }
        set { _status = value; }
    }
}

Here's how I get the information about the printers on the remote computer:

public List<CustomPrinterObject> GetLocalPrinters(string serverName)
{
    string[] pStatus = {"Other","Unknown","Idle","Printing","WarmUp","Stopped Printing",
                        "Offline"};

    string[] pState = {"Paused","Error","Pending Deletion","Paper Jam","Paper Out",
                       "Manual Feed","Paper Problem","Offline","IO Active","Busy",
                       "Printing","Output Bin Full","Not Available","Waiting",
                       "Processing","Initialization","Warming Up","Toner Low",
                       "No Toner","Page Punt","User Intervention Required",
                       "Out of Memory","Door Open","Server_Unknown","Power Save"};

    List<CustomPrinterObject> printers = new List<CustomPrinterObject>();

    string query = "SELECT * FROM Win32_Printer";
    ManagementObjectSearcher searcher = new ManagementObjectSearcher(query);
    searcher.Scope = new ManagementScope("\\\\" + serverName + "\\root\\CIMV2");
    ManagementObjectCollection coll = searcher.Get();

    System.Windows.Forms.MessageBox.Show(coll.Count.ToString());

    foreach (ManagementObject printer in coll)
    {
        CustomPrinterObject prn = new CustomPrinterObject();

        foreach (PropertyData property in printer.Properties)
        {
            if (property.Value != null)
            {
                switch (property.Name)
                {
                    case "Name":
                        prn.Name = property.Value.ToString();
                        break;
                    case "Comment":
                        prn.Comment = property.Value.ToString();
                        break;
                    case "PrinterState":
                        prn.PrinterState = pState[Convert.ToInt32(property.Value)];
                        break;
                    case "PrinterStatus":
                        prn.PrinterStatus = pStatus[Convert.ToInt32(property.Value)];
                        break;
                    case "Location":
                        prn.Location = property.Value.ToString();
                        break;
                    case "Type":
                        prn.Type = property.Value.ToString();
                        break;
                    case "DriverName":
                        prn.Model = property.Value.ToString();
                        break;
                    case "WorkOffline":
                        prn.Status = property.Value.ToString().Equals("True") ? "Offline" : "Online";
                        break;
                    default:
                        break;
                }
            }
        }
        printers.Add(prn);
    }
    return printers;
}

Reading registry contents on a remote machine is, again, very easy.

On the local computer I would open the key like this

RegistryKey rk = Registry.LocalMachine.OpenSubKey(subKey);

And on the remote I would do it like this

RegistryKey hklm = RegistryKey.OpenRemoteBaseKey(RegistryHive.LocalMachine, "MyRemoteServer");
RegistryKey rk = hklm.OpenSubKey(subKey);
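
And once the key is open, reading a value is the usual business (the subkey path and value name here are just examples):

string subKey = @"SOFTWARE\MyVendor\MyApp";  // example path
using (RegistryKey hklm = RegistryKey.OpenRemoteBaseKey(RegistryHive.LocalMachine, "MyRemoteServer"))
using (RegistryKey rk = hklm.OpenSubKey(subKey))
{
    if (rk != null)
    {
        // GetValue returns null when the value name is not present
        string version = rk.GetValue("Version") as string;
    }
}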

Obviously, all of the samples will work subject to permissions of the account that runs them. Errors will happen if the account does not have enough privileges to access the printers or services on the remote computer.


Thursday, October 1, 2009

Simple WCF client/server

So communicating between two Windows services on the same computer is easy. But then I was asked: what if we decide in the future that we want these services to run on separate machines? Well, I guess we would need to make changes to both of them ... and that is exactly what we want to avoid. Okay, WCF offers a few ways to host a service - in a managed application, in a managed Windows service, in IIS, in WAS ... (Hosting Options). Since I already have Windows services implemented, the choice is obvious. I looked up a couple of tutorials on the subject fairly quickly:
How to: Host a WCF Service in a Managed Windows Service, WCF Tutorial - Basic Interprocess Communication

However, that was not quite enough for me: the first tutorial explained the configuration file a bit but did not implement the client, while the second implemented both server and client but had no information on configuration at all. So I quickly got to the point where I had a server and a client running inside separate Windows services on the same machine, but as soon as I tried taking one of the services away - to another computer on the network - different errors started to happen. There is not enough time and space to explain each error and its reason, so here are just a few words on what I ended up with (which eventually worked).

Service implementation

To define and implement the function on the server:

[ServiceContract(Namespace="MyNamespace.IMyInterface")]
public interface IMyInterface
{
    [OperationContract]
    string ReturnMyString();
}

public class MyInterfaceImplementation : IMyInterface
{
    public string ReturnMyString()
    {
        return "My String";
    }
}

To create the instance of the host, first define the host

private ServiceHost host;

In the service OnStart method

if (host != null)
{
    host.Close();
}

host = new ServiceHost(typeof(MyInterfaceImplementation), new Uri[] { new Uri("http://MyServer:8000") });
host.AddServiceEndpoint(typeof(IMyInterface), new BasicHttpBinding(), "MyMethod");

In the service OnStop method

if (host != null)
{
    host.Close();
    host = null;
}

This part was fairly easy.

Service configuration

This bit went into the app.config file, inside the "configuration" element.

<system.serviceModel>
  <services>
    <service name="MyNamespace.MyService" behaviorConfiguration="MyServiceBehavior">
      <host>
        <baseAddresses>
          <add baseAddress="http://MyServer:8000/MyMethod"/>
        </baseAddresses>
      </host>
      <!-- this endpoint is exposed at the base address provided by host -->
      <endpoint address="" binding="basicHttpBinding" contract="MyNamespace.IMyInterface" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="MyServiceBehavior">
        <serviceMetadata httpGetEnabled="true"/>
        <serviceDebug includeExceptionDetailInFaults="False"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

Note the service name attribute "MyNamespace.MyService", which must be the namespace-qualified name of the service implementation class, and the endpoint contract attribute, which matches the ServiceContract's Namespace attribute. Small things like these are easy to get wrong, and the error messages will not be very informative.

Client implementation

[ServiceContract(Namespace="MyNamespace.IMyInterface")]
public interface IMyInterface
{
    [OperationContract]
    string ReturnMyString();
}

public string MyClientString()
{
    string result = string.Empty;

    string endpoint = "http://MyServer:8000/MyMethod";

    ChannelFactory<IMyInterface> httpFactory = new ChannelFactory<IMyInterface>(
        new BasicHttpBinding(), new EndpointAddress(endpoint));

    IMyInterface httpProxy = httpFactory.CreateChannel();

    while (true)
    {
        result = httpProxy.ReturnMyString();
        if (result != string.Empty)
        {
            return result;
        }
    }
}

I initially missed the [ServiceContract(Namespace="MyNamespace.IMyInterface")] bit on the interface definition, and the error message was really not helping. It went like this: "Exception: The message with Action 'http://tempuri.org/IMyInterface/ReturnMyString' cannot be processed at the receiver" and so on. What tempuri.org? I never put any tempuri.org in my project! OK, it turns out it is the default namespace that gets used when you do not provide your own.
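
To illustrate what was going on: with no Namespace on the contract, WCF falls back to http://tempuri.org/ when it builds the SOAP Action, which is exactly the URI quoted in the exception:

[ServiceContract]   // no Namespace given
public interface IMyInterface
{
    // the Action becomes http://tempuri.org/IMyInterface/ReturnMyString
    [OperationContract]
    string ReturnMyString();
}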

Client configuration

Just a small bit of configuration was required here (and I'm not even 100% sure that all of it is required):

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="basicHttp"/>
    </basicHttpBinding>
  </bindings>
  <client>
    <!-- this endpoint is exposed at the base address provided by host -->
    <endpoint address="" binding="basicHttpBinding" contract="MyNamespace.IMyInterface" />
  </client>
</system.serviceModel>

Overall, it's just a few dozen lines of code, but it took me almost a whole day to get it working properly through the network.


Wednesday, September 16, 2009

A Small Unit Testing Gem

Since I started writing unit tests for my code, I had this question in mind. Let's say I have a project that is a class library. I have a class in that library and this class has some internal methods. Like this:

public class MyClass
{
    public void MyPublicMethod()
    {
        int k = 0;
        // do something ...
        int z = MyInternalMethod(k);
        // do something else ...
    }

    internal int MyInternalMethod(int i)
    {
        // do something ...
        return i * 1000;
    }
}

Now I want to write unit tests for these methods. I would create a "UnitTests" project, reference the nunit.framework from it and write something like this:

[TestFixture]
public class UnitTests
{
    private MyClass myClass;

    [SetUp]
    public void SetupTest()
    {
        myClass = new MyClass();
    }

    [Test]
    public void TestMyInternalMethod()
    {
        int z = 100;
        int k = myClass.MyInternalMethod(z); // CAN NOT DO THIS!
        Assert.AreEqual(k, 100000);
    }

    [TearDown]
    public void TearDown()
    {
        myClass = null;
    }
}

Of course, I cannot do this, because of the scope of MyInternalMethod. Today the StackOverflow guys pointed me to this little gem, which is very helpful.

.Net Gem - How to Unit Test Internal Methods

Here's the short summary:

Go to the project that contains MyClass. Locate the AssemblyInfo.cs file. Add the following line to it:

[assembly: InternalsVisibleTo("UnitTests")]

Done!


Thursday, September 10, 2009

Thread Pooling

I have to take care of multiple printers in my application. The "Print Manager" receives a list of jobs which is basically an XML file of a simple structure - a number of PrintJob nodes. Each print job has a printer assigned to it.

The Print Manager has to send each job to the appropriate printer, and also notify the sender of the XML of the completion or failure of each job. I'm sure tasks like these are common, but somehow I could not find good suggestions on implementing this one. I did find the Miscellaneous Utility Library though (written by Jon Skeet himself, by the way), which implements a class called "CustomThreadPool" that allows creating multiple thread pools in a .NET application.

So, my approach so far is as follows: Get a print job. If a pool exists for this printer, place the job in a thread in the pool. Otherwise, create a pool and place the job in a thread in this pool. Get next job.

Here is how it looks so far:

private List<CustomThreadPool> _printerThreads = new List<CustomThreadPool>();

delegate Errors ThreadMethod(PrintJob job);

private Errors InsertThread(PrintJob job)
{
    return ProcessSinglePrintJob(job);
}

// stuff ...

public void ProcessPrintJobs()
{
    if (_printJobs != null)
    {
        foreach (PrintJob printJob in _printJobs)
        {
            if (String.IsNullOrEmpty(printJob.PrinterName))
            {
                printJob.JobResult = Errors.PrinterNameNotSpecified;
            }
            else if (String.IsNullOrEmpty(printJob.ReaderName) && printJob.IsEncodeSmartCard)
            {
                printJob.JobResult = Errors.SmartCardReaderNameNotSpecified;
            }
            else
            {
                CustomThreadPool pool = _printerThreads.Find(delegate(CustomThreadPool test)
                {
                    return test.Name == printJob.PrinterName;
                });

                if (pool == null)
                {
                    pool = new CustomThreadPool(printJob.PrinterName);
                    // remember the pool, so the next job for this printer finds it
                    _printerThreads.Add(pool);
                }

                ThreadMethod method = new ThreadMethod(InsertThread);

                pool.AddWorkItem(method, printJob);
            }
        }
    }
}

I don't have extensive experience with multithreading, so this solution might not even work, or it may be too complex for the task. I'll run some tests with the actual printers soon anyway.


Tuesday, September 1, 2009

Studying Interprocess Communication

Today I had to solve a simple problem. Let's say there are two processes running on one computer. The first service polls a database for print jobs. As soon as a job is found, a second service has to send the job to the printer. So, effectively, I have to pass some data from one local service to another.

The first, "amateurish" solution that came to my mind was to write data to a text file from the "polling" service and read from that file in the "printing" service. But I thought that a task like this should be a standard one and looked around. Here's one of the examples I found:

.NET 3.5 Adds Named Pipes Support

Here's probably the simplest working example. First, I need to create two Windows services. I add a timer to each service. I also add an event log to each of the services to be able to check that they work. One of the services will be a "server"; here's what goes into its timer_Elapsed:

using (NamedPipeServerStream pipeServer = new NamedPipeServerStream("testPipe", PipeDirection.Out))
{
    pipeServer.WaitForConnection();

    try
    {
        using (StreamWriter sw = new StreamWriter(pipeServer))
        {
            sw.AutoFlush = true;
            string dt = DateTime.Now.ToString();
            sw.WriteLine(dt);
            pollingEventLog.WriteEntry(dt + " written by the server");
        }
    }
    catch (IOException ex)
    {
        pollingEventLog.WriteEntry(ex.Message);
    }
}

The other service will be a "client". Here's what goes into its timer_Elapsed:

using (NamedPipeClientStream pipeClient = new NamedPipeClientStream(".", "testPipe", PipeDirection.In))
{
    pipeClient.Connect();
    using (StreamReader sr = new StreamReader(pipeClient))
    {
        string temp;
        while ((temp = sr.ReadLine()) != null)
        {
            printManagerEventLog.WriteEntry(temp + " read by the client");
        }
    }
}

This is it - after both services are compiled, installed and started, their cooperation can be observed through the Event Log. Total time including googling, understanding the concept and implementing the working example - under 30 minutes.


Monday, August 24, 2009

Human Readable Entries For The Event Log

Looks like I've been a bit busy recently!
Anyway, just a little trick I used today to produce human readable messages for the event log, avoiding complex switches or if/else blocks.
First, I put all possible errors in the enum, including the "no error", like this:

public enum Errors
{
    ProcessingCompletedSuccessfully = 0,
    CouldNotEstablishConnectionToPrinter = 1,
    ...
    GlobalSystemShutdownPreventedCompletingTheTaskInATimelyFashion = 999
}

The event log is created as usual

private System.Diagnostics.EventLog pollingEventLog;

if (!EventLog.SourceExists("MyHumbleSource"))
{
    EventLog.CreateEventSource("MyHumbleSource", "MyHumbleService");
}
pollingEventLog = new EventLog(); // create the component before configuring it
pollingEventLog.Source = "MyHumbleSource";
pollingEventLog.Log = "MyHumbleService";

The function returns an error code, which is then written to the event log:

Errors error = PerformMyVeryComplexProcessing(data);
WriteErrorToLogFile(error);

Finally, a small function that does the important stuff:

private void WriteErrorToLogFile(Errors error)
{
    string inputstr = error.ToString();
    Regex reg = new Regex(@"([a-z])[A-Z]");
    MatchCollection col = reg.Matches(inputstr);
    int iStart = 0;
    string Final = string.Empty;
    foreach (Match m in col)
    {
        int i = m.Index;
        Final += inputstr.Substring(iStart, i - iStart + 1) + " ";
        iStart = i + 1;
    }
    string Last = inputstr.Substring(iStart, inputstr.Length - iStart);
    Final += Last;

    pollingEventLog.WriteEntry(Final);
}

I did not write the function myself - I got the solution from
Split words by capital letter using Regex

It takes the error, converts its name to string and splits the string by capital letters.
If the error returned was CouldNotEstablishConnectionToPrinter, then "Could Not Establish Connection To Printer" will be written to the event log.
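
As an aside, the same effect can be had in a single call with Regex.Replace:

// insert a space between each lowercase letter and the capital that follows it
string friendly = Regex.Replace(error.ToString(), "([a-z])([A-Z])", "$1 $2");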


Thursday, July 30, 2009

WebOS Development - First Steps

So, what is WebOS development about? The first thing is to learn some 'WebOS speak'. The application begins with creating a 'stage', which is more or less like a main page for a website. The web pages, or what we call 'Forms' in Windows Forms, are called 'scenes'; these are HTML files. There is also code-behind for the scenes; it is JavaScript, and these files are called 'assistants' in WebOS speak. The command to generate the application template is

palm-generate -p "{title:'My Application Title', id:com.mystuff.myapp, version:'1.0.0'}" MyAppVersionOne

More on application structure here

Application Structure

So, my first app has two scenes. On the first one I can press a button and this will insert a record into a database table; another button will take me to the second scene. On the second scene the database table records are displayed, plus a button that takes me back to the first scene. Pretty basic stuff. The scene markup is as simple as it gets.

The add_button adds the record, the display_button takes me to the next scene. The other divs I use for debugging - they just display text. Now to the code-behind assistant.

function FirstAssistant() {
    /* this is the creator function for your scene assistant object. It will be passed all the
       additional parameters (after the scene name) that were passed to pushScene. The reference
       to the scene controller (this.controller) has not been established yet, so any initialization
       that needs the scene controller should be done in the setup function below. */
    this.db = null;
}

FirstAssistant.prototype.setup = function() {
    this.CreateDB();
    Mojo.Event.listen($('add_button'), Mojo.Event.tap, this.AddEntry.bind(this));
    Mojo.Event.listen($('display_button'), Mojo.Event.tap, this.DisplayEntries.bind(this));
    $('result2').update('debug comment');
}

Okay, you can read the comments. Define variables, subscribe to events, open or create the database - set to go! This is the CreateDB function by the way

FirstAssistant.prototype.CreateDB = function(){
    try
    {
        this.db = openDatabase('SampleDB', '', 'Sample Data Store', 65536);
        if (!this.db)
        {
            $('result').update('error opening db');
        }
        else
        {
            $('result').update('opened db');

            var string = 'CREATE TABLE IF NOT EXISTS table1 (col1 TEXT NOT NULL DEFAULT "nothing", col2 TEXT NOT NULL DEFAULT "nothing"); GO;';

            this.db.transaction(
                (function (transaction) {
                    transaction.executeSql(string, [], this.createTableDataHandler.bind(this), this.errorHandler.bind(this));
                }).bind(this) );
        }
    }
    catch (e)
    {
        $('result').update(e);
    }
}

WebOS uses SQLite as its database engine. This is how a record is inserted:

FirstAssistant.prototype.AddEntry = function() {
    var string = 'INSERT INTO table1 (col1, col2) VALUES ("test","test"); GO;';

    this.db.transaction(
        (function (transaction) {
            transaction.executeSql(string, [], this.createRecordDataHandler.bind(this), this.errorHandler.bind(this));
        }).bind(this)
    );
}

This is how I move to the next scene:

FirstAssistant.prototype.DisplayEntries = function() {
    this.controller.stageController.pushScene('second', this.db);
}

I pass the name of the scene I want to 'push' and then the parameters. On the second scene I will grab these parameters and use them. It is almost the same as the first scene, but since I already have the database open, I pass it to the second scene.

function SecondAssistant(origdb) {
    this.db = origdb;
}

Okay, displaying the results and formatting them are outside the scope of this brief post. Also, I copied the code from one of the samples on developer.palm.com; the application is called Data, and the code can be found under Data\app\assistants\storage\sqlite and Data\app\views\storage\sqlite. Here is how the scenes look on the emulator


Wednesday, July 29, 2009

WebOS Development

Quite a while ago (around 8 years) I was doing some development for PalmOS devices. With the new Palm Pre soon to be released in Australia and the Mojo SDK now freely available to everyone I decided to have a look at WebOS development. There is no need to have a Palm Pre device to begin development, I only had to download and install the SDK, Java and the VirtualBox to run the emulator. There is also a possibility of using Eclipse with the WebOS plugin but since I'm just starting and not doing anything complex, I'm happy to use Notepad++ as the IDE.

Installing the Palm® Mojo™ SDK on Windows

The Hello World page gives an understanding of the steps required to create, compile and install a WebOS application. The "SDLC" is simple and takes a few seconds - generate the application, add some functionality, package, install on the emulator, have a look, improve/fix functionality, package, install on the emulator, have a look ...

Hello, World!

There are also sample applications available and in the first few hours that I spent I learned a few things about how the applications function - scenes, assistants etc. - and also a few basic things about how the databases are used.

Samples


Wednesday, July 22, 2009

Mifare 1K Memory Structure

To complete the task I'm working on and read/write to smart cards, I had to understand the memory structure of the Mifare Standard 1K card. This was no easy task for a weak and small brain of mine! I figured it out finally and here is a very short summary of my understanding:

The total memory of 1024 bytes is divided into 16 sectors of 64 bytes, and each sector is divided into 4 blocks of 16 bytes. Blocks 0, 1 and 2 of each sector can store data, and block 3 is used to store keys and access bits (the exception is the 'Manufacturer Block', which cannot store data).
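
The arithmetic of this layout is easy to capture in code, something like:

// Mifare Classic 1K: 16 sectors x 4 blocks x 16 bytes;
// block 3 of every sector is the trailer (keys + access bits)
static int SectorOfBlock(int absoluteBlock)
{
    return absoluteBlock / 4;
}

static bool IsSectorTrailer(int absoluteBlock)
{
    return absoluteBlock % 4 == 3;
}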

The data in any sector can be protected by either Key A or both Key A and Key B security keys. I do not need to use Key B, and in this case the bytes in the trailer can be used for data. If the sector is protected by the security key, this key has to be loaded before data can be accessed by using a special command.

Access bits define the way the data in the sector trailer and the data blocks can be accessed. Access bits are stored twice - inverted and non-inverted - in the sector trailer, as shown in the images.

Some examples:

Data stored in the sector trailer:
01 02 03 04 05 06 FF 07 80 69 11 12 13 14 15 16
01 02 03 04 05 06 – Key A
FF 07 80 69 – Access bits
11 12 13 14 15 16 – Key B (or data if Key B is not used)

Bytes 6, 7, 8 are access data
FF 07 80

Binary representation:
11111111 = FF
00000111 = 07
10000000 = 80

The bits that define access to the keys (C13, C23 and C33 in the image above) are bit 7 of byte 7 (C13 = 0), bit 3 of byte 8 (C23 = 0) and bit 7 of byte 8 (C33 = 1), and they form the 001 sequence. Their inverted copies are bit 3 of byte 6, bit 7 of byte 6 and bit 3 of byte 7, and they form, as expected, the sequence 110.

From the table above I can see that 001 means that Key A cannot be read, but can be written, and Key B may be read. This is the "transport configuration", and it was read from a card that had never been used.
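
Extracting those bits in code makes the example easier to follow; roughly:

byte b6 = 0xFF, b7 = 0x07, b8 = 0x80;

// access bits for block 3 (the sector trailer)
int c1 = (b7 >> 7) & 1;   // C13 = bit 7 of byte 7 -> 0
int c2 = (b8 >> 3) & 1;   // C23 = bit 3 of byte 8 -> 0
int c3 = (b8 >> 7) & 1;   // C33 = bit 7 of byte 8 -> 1, giving 001

// the inverted copies
int nc1 = (b6 >> 3) & 1;  // ~C13 = bit 3 of byte 6 -> 1
int nc2 = (b6 >> 7) & 1;  // ~C23 = bit 7 of byte 6 -> 1
int nc3 = (b7 >> 3) & 1;  // ~C33 = bit 3 of byte 7 -> 0, giving 110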

A bit more on Mifare 1K commands next time.


Sunday, July 19, 2009

Smart Cards Do Not Hurt Anymore

Ah, the great mystery of talking to the smart card is solved. The tool that helped me to do it is called CHIPDRIVE Smartcard Commander. It is a free tool and can easily be found, downloaded and installed.

When I positioned the card in the reader and started the Smartcard Commander, I could immediately see that it knows a lot of stuff about the card.

But what's more important, it has some sample scripts that can be loaded when I select "CPU Card" from the System tree and use the Load Script button. The sample script shows me how to construct commands that can be sent to the card; I can also run them immediately and see the results.

I only need to send the proper commands now...


Friday, July 17, 2009

Smart Cards Hurt - 3

Resolved the problem with the Smart Card reader. After everything else failed, I tried installing the printer and reader on a clean PC. Surprisingly, it worked. Then I tried uninstalling all the drivers from my PC, restarting and reinstalling again. Unsurprisingly, it did not work (I had tried doing this before).

Next, I decided to compare the driver versions between my PC and the clean PC. And there it was - my driver said "SCM Microsystems 4.35.00.01" and the one on the clean PC said "SCM Microsystems 5.15". And, of course, the S331DICL.sys files had different dates. So I copied the S331DICL.sys over and installed the drivers again. That did not quite help though: the driver version was now the proper one, but the version on the device itself was not.

Why were the versions different? Only when I searched the whole computer for S331DICL.sys could I figure out the most likely reason for my problem: it looks like the old driver version had been installed by the 3M scanner installer. I found the old S331DICL.sys in one of the subfolders under its Program Files folder. So, when I was installing the driver, it remembered that location and used the old file that came with the 3M scanner. I uninstalled the 3M application, made sure the S331DICL.sys file was deleted completely from my computer, copied over the new version, and pointed the smart card driver installation to the new S331DICL.sys file. Now it finally worked.

Next thing is to actually implement communication to the smart card ...


Wednesday, July 15, 2009

Unit Testing - First Attempts

I had some time to spare while I was investigating the smart card issues, so I decided to do the right thing - something I have never done before: to learn how to write and use unit tests. Since I had a very small application that was testing the capabilities of the printer, it looked like a perfect guinea pig for my experiment. It turned out that writing tests is not as hard as I expected it to be. Well, I cannot promise that I did it right, of course, because no one writes them here anyway, and there is no one to mentor me or point out my mistakes.

So first of all I downloaded and installed NUnit framework

NUnit framework

Then I added a project of type class library to my solution, and a single class called UnitTest to this project. Here is the full code of the UnitTest class:

using System;
using NUnit.Framework;
using SmartCardTest;
using DataCardCP40;

[TestFixture]
public class UnitTest
{
    public DataCardPrinter printer;
    ICE_API.DOCINFO di;

    [Test]
    public void CreateObjects()
    {
        printer = new DataCardPrinter();
        di = DataCardPrinter.InitializeDI();
        printer.CreateHDC();
        Assert.AreNotEqual(printer.Hdc, 0);
        Assert.Greater(di.cbSize, 0);
    }

    [Test]
    public void SetInteractiveMode()
    {
        int res = ICE_API.SetInteractiveMode(printer.Hdc, true);
        Assert.Greater(res, 0);
    }

    [Test]
    public void StartDoc()
    {
        int res = ICE_API.StartDoc(printer.Hdc, ref di);
        Assert.Greater(res, 0);
    }

    [Test]
    public void StartPage()
    {
        int res = ICE_API.StartPage(printer.Hdc);
        Assert.Greater(res, 0);
    }

    [Test]
    public void RotateCardSide()
    {
        int res = ICE_API.RotateCardSide(printer.Hdc, 1);
        Assert.Greater(res, 0);
    }

    [Test]
    public void FeedCard()
    {
        int res = ICE_API.FeedCard(printer.Hdc, ICE_API.ICE_SMARTCARD_FRONT + ICE_API.ICE_GRAPHICS_FRONT);
        Assert.Greater(res, 0);
    }

    [Test]
    public void SmartCardContinue()
    {
        int res = ICE_API.SmartCardContinue(printer.Hdc, ICE_API.ICE_SMART_CARD_GOOD);
        Assert.Greater(res, 0);
    }

    [Test]
    public void EndPage()
    {
        int res = ICE_API.EndPage(printer.Hdc);
        Assert.Greater(res, 0);
    }

    [Test]
    public void EndDoc()
    {
        int res = ICE_API.EndDoc(printer.Hdc);
        Assert.Greater(res, 0);
    }
}

There's not much to explain. First I create the required objects and verify that the device context was created and the DOCINFO struct was initialized. All the other tests just check the return codes of the printer functions: the functions return 0 on error, so the check is for the return value being greater than zero.

After compiling and fixing errors I realized that I have no way to set the sequence of execution. Supposedly, as the theory teaches us, each test should be able to run alone, independent of whether the rest passed, failed or ran at all. Well, that does not work so well in my case - if I want to test that the card can be ejected from the printer, I need to somehow insert it first! I found out, however, that the tests are executed in the alphabetical order of their names. Okay, that'll do for now. So I just renamed my tests like this: A_CreateObjects(), B_SetInteractiveMode() etc. Then I compiled the solution, creating "DataCardTest.dll". The next step is to run NUnit and open the dll. Wow! The smart thing can see all my tests now. When ready, just select Test->Run all from the menu and enjoy ...
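
So the renamed tests look along these lines:

[Test]
public void A_CreateObjects() { /* same body as CreateObjects above */ }

[Test]
public void B_SetInteractiveMode() { /* same body as SetInteractiveMode above */ }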

It does not always end that well, however - it might be like this (see how it tells which line the error happened on, and how the expected test result differed from the actual).

What happened here? It took me some time to figure out ... the default printer was not set to my card printer.


Friday, July 10, 2009

Smart Cards Hurt - 2

Now, the slightly harder part is communicating with the Smart Card reader. Most, if not all, of the functionality resides within the winscard.dll. For functions reference, this MSDN page could be a start.

Smart Card Functions

I also found a nice example using google code search which resides here

ACR120Driver.cs

and using this code as a template, I used the following code to test the functionality of my SCM reader.

long retCode;
int hContext = 0;
int ReaderCount = 0;
int Protocol = 0;
int hCard = 0;
string defaultReader = null;
int SendLen, RecvLen;

byte[] SendBuff = new byte[262];
byte[] RecvBuff = new byte[262];

ModWinsCard.SCARD_IO_REQUEST ioRequest;

retCode = ModWinsCard.SCardEstablishContext(ModWinsCard.SCARD_SCOPE_USER, 0, 0, ref hContext);
if (retCode != ModWinsCard.SCARD_S_SUCCESS)
{
    System.Diagnostics.Debug.WriteLine(ModWinsCard.GetScardErrMsg(retCode));
}

// The first call just returns the required buffer size in ReaderCount
retCode = ModWinsCard.SCardListReaders(hContext, null, null, ref ReaderCount);

if (retCode != ModWinsCard.SCARD_S_SUCCESS)
{
    System.Diagnostics.Debug.WriteLine(ModWinsCard.GetScardErrMsg(retCode));
}

byte[] retData = new byte[ReaderCount];
byte[] sReaderGroup = new byte[0];

// Get the list of readers again, but this time pass sReaderGroup and retData
// as the 2nd and 3rd parameters respectively
retCode = ModWinsCard.SCardListReaders(hContext, sReaderGroup, retData, ref ReaderCount);

if (retCode != ModWinsCard.SCARD_S_SUCCESS)
{
    System.Diagnostics.Debug.WriteLine(ModWinsCard.GetScardErrMsg(retCode));
}

// Convert retData (a null-separated multi-string) to a string and split it
string readerStr = System.Text.ASCIIEncoding.ASCII.GetString(retData);
string[] rList = readerStr.Split('\0');

foreach (string readerName in rList)
{
    if (readerName != null && readerName.Length > 1)
    {
        defaultReader = readerName;
        break;
    }
}

if (defaultReader != null)
{
    retCode = ModWinsCard.SCardConnect(hContext, defaultReader, ModWinsCard.SCARD_SHARE_DIRECT,
        ModWinsCard.SCARD_PROTOCOL_UNDEFINED, ref hCard, ref Protocol);
    // Check if it connected successfully
    if (retCode != ModWinsCard.SCARD_S_SUCCESS)
    {
        string error = ModWinsCard.GetScardErrMsg(retCode);
    }
    else
    {
        int pcchReaderLen = 256;
        int state = 0;
        byte atr = 0;
        int atrLen = 255;

        // get card status
        retCode = ModWinsCard.SCardStatus(hCard, defaultReader, ref pcchReaderLen, ref state, ref Protocol, ref atr, ref atrLen);

        if (retCode != ModWinsCard.SCARD_S_SUCCESS)
        {
            return;
        }

        // read/write data etc.

        .....
    }
}

ModWinsCard.cs is, again, a wrapper for the winscard.dll functions and data structures, and declares all the required constants.

Anyway, this code actually worked fine, except for one little detail - the state variable returned by SCardStatus had the value of 2. The possible values are explained here:

SCardStatus
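
For readability, the state values from that page can be collected into an enum (values as documented for winscard):

// reader/card states reported by SCardStatus
enum CardState
{
    Unknown    = 0,  // SCARD_UNKNOWN
    Absent     = 1,  // SCARD_ABSENT
    Present    = 2,  // SCARD_PRESENT - present but not in position for use
    Swallowed  = 3,  // SCARD_SWALLOWED
    Powered    = 4,  // SCARD_POWERED
    Negotiable = 5,  // SCARD_NEGOTIABLE - reset, awaiting protocol negotiation
    Specific   = 6   // SCARD_SPECIFIC
}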

"2" is SCARD_PRESENT, which means "A card is present in the card reader, but it is not in position for use". A better result would be something like SCARD_NEGOTIABLE which is "The card has been reset and is waiting for protocol negotiation".

Also, using SCardConnect with the preferred protocol set to T0 or T1 returned the SCARD_W_UNRESPONSIVE_CARD error.

Now this is the point where I had to consult the printer manufacturer, because there are a number of possible reasons for the errors - hardware, firmware, drivers or an incompatible card. Work still in progress.


Tuesday, July 7, 2009

Smart Cards Hurt - 1

So here's the new toy I've got to play with - the DataCard CP40 Plus card printer with the SCM SCR331-DI Smart Card reader.

Datacard CP40 Plus

Developing the application for the printer consists mostly of two parts - communicating with the printer and communicating with the smart card reader. You tell the printer to pick up the card, you tell the printer to position the card for smart card processing, you tell the smart card reader to write data to the smart card, you tell the printer to encode the magstripe and print something on the card, you tell the printer to finish with the print job.

It does not look so easy when you read the manual. This is what the programming flow looks like:

In reality, though, the whole printer communication is mostly done by the following code:

printer.Hdc = PrintDoc.PrinterSettings.CreateMeasurementGraphics().GetHdc().ToInt32();

/* Set Interactive mode */
if (ICE_API.SetInteractiveMode(printer.Hdc, true) > 0)
{
    ICE_API.DOCINFO di = new ICE_API.DOCINFO();
    /* Initialize DOCINFO */
    di.cbSize = 16;
    di.lpszDocName = "Card Printer SDK Test";
    di.lpszDataType = string.Empty;
    di.lpszOutput = string.Empty;

    /* Start document and page */
    if (ICE_API.StartDoc(printer.Hdc, ref di) > 0)
    {
        if (ICE_API.StartPage(printer.Hdc) > 0)
        {
            /* Set card rotation on */
            ICE_API.RotateCardSide(printer.Hdc, 1);
            /* Feed the card into the smart card reader */
            if (ICE_API.FeedCard(printer.Hdc, ICE_API.ICE_SMARTCARD_FRONT + ICE_API.ICE_GRAPHICS_FRONT) > 0)
            {
                /* Put any SmartCard processing/communication here */
                TalkToSmartCard();
            }
            /* Remove the card from the reader and continue printing */
            ICE_API.SmartCardContinue(printer.Hdc, ICE_API.ICE_SMART_CARD_GOOD);
            /* End the page */
            ICE_API.EndPage(printer.Hdc);
        }
        /* End job */
        ICE_API.EndDoc(printer.Hdc);
    }
}

ICE_API mostly contains wrappers for the functions from ICE_API.dll, which comes with the printer, and defines some constants and data structures, like this:

[StructLayout(LayoutKind.Sequential)]
public struct DOCINFO
{
    public int cbSize;
    public string lpszDocName;
    public string lpszOutput;
    public string lpszDataType;
}

........

[DllImport("ICE_API.dll")]
public static extern int FeedCard(int hdc, int dwCardData);

[DllImport("ICE_API.dll")]
public static extern int GetCardId(int hdc, CARDIDTYPE pCardId);

[DllImport("ICE_API.dll")]
public static extern int SmartCardContinue(int hdc, int dwCommand);

.........

public const int ICE_SMARTCARD_FRONT = 0x10;
public const int ICE_GRAPHICS_FRONT = 0x1;

public const int ICE_SMART_CARD_GOOD = 0;
public const int ICE_SMART_CARD_ABORT = 1;

Now that was the easy part.


Saturday, June 27, 2009

Embedded Technology Workshop

Some members of our team, including myself, attended a small, half-day workshop on Microsoft Embedded Technologies. Here's what the agenda looked like:

12:30            Registration and light lunch
13:00 – 13:05    Welcome speech
13:10 – 13:30    Introduction: Why use Embedded? What are the benefits?
13:30 – 15:00    Module 1: Windows Embedded Standard – Development Suite, Tools and Utilities.
                 Module 2: Embedded Enabling Features.
15:00            Tea/Coffee Break
14:30 – 16:00    Module 3: Demo: Building an image using File Based Write Filter.
                 Module 4: Componentization of 3rd Party Drivers.
                 Module 5: Demo: Creating Custom Components in your image.
16:00 – 16:30    Q & A
16:30            Closing and thank you

It was held at the local Microsoft office (not Microsoft Office, but the actual place where, like, people work). The office was pretty boring by the way - no huge Bill Gates portraits, no sacrifices etc ... maybe they clean up when they know strangers will be present.

Anyway, the topic was mostly about how to assemble your own embedded OS from parts of a dismembered Windows XP with Windows Embedded Standard etc. Basically, if I know exactly what peripheral devices my hardware will use, I can include drivers for those devices only, hugely reducing the size of the OS. I may also choose to cut out other elements of the OS - I may get rid of the whole Explorer shell altogether. They mentioned that the smallest OS they had actually seen used by one of the clients was about 8MB in size. Quite impressive compared to the standard XP footprint of about 1.9GB.

As they said, the goal of the workshop was to show the participants that the process of assembling your own OS is not as complicated as people usually think. Can't say they succeeded - it still looks fairly complex to me so far ...



Tuesday, June 23, 2009

VSS => TFS converter application update

After a bit of thought, I decided on the easiest and most convenient way to run my small application that helps convert projects from VSS to TFS.

I will start the command line tools by passing them, together with their parameters, to the cmd.exe application, and run cmd.exe with the /k parameter to prevent the command window from closing after the tool exits. I will keep the ID of the process that runs cmd.exe. The next time I run cmd.exe, I will check whether an ID is present and, if yes, kill that process and then start a new one. This way the user's computer will not be littered with command windows.

So, the small class that takes care of process management looks like this:

public class ProcessFactory
{
    private static int _currentProcessID = -1;

    private static Process _cmdProcess;

    private static ProcessStartInfo _startInfo;

    public static ProcessStartInfo StartInfo
    {
        get
        {
            if (_startInfo == null)
            {
                _startInfo = new ProcessStartInfo();
            }
            return _startInfo;
        }
    }

    public static void RunProcess(string filename, string args, string workingdir)
    {
        try
        {
            if (_currentProcessID > 0)
            {
                Process processToClose = Process.GetProcessById(_currentProcessID);
                if (processToClose != null)
                {
                    processToClose.Kill();
                }
                _currentProcessID = -1;
            }

            StartInfo.FileName = filename;
            StartInfo.Arguments = args;
            StartInfo.WorkingDirectory = workingdir;

            _cmdProcess = Process.Start(StartInfo);

            if (_cmdProcess != null)
            {
                _currentProcessID = _cmdProcess.Id;
            }
        }
        catch (Exception ex)
        {
            Logger.LogInfo(ex);
        }
    }
}

And then I just call RunProcess as many times as I want, and the user will not be bothered with "leftover" command windows:

string args = "/k ssarc.exe -d- -i -y" + SettingsManager.GetSetting(Constants.VSSLOGIN) + "," +
              SettingsManager.GetSetting(Constants.VSSPASSWORD) + " -s" +
              SettingsManager.GetSetting(Constants.VSSDBFOLDER) + " " +
              SettingsManager.GetSetting(Constants.VSSARCHIVEFILENAME) + ".ssa" + " \"" +
              SettingsManager.GetSetting(Constants.VSSPROJECTNAME) + "\"";

ProcessFactory.RunProcess("cmd.exe", args, SettingsManager.GetSetting(Constants.VSSINSTFOLDER));

.....

args = "/k ssrestor.exe \"-p" + SettingsManager.GetSetting(Constants.VSSPROJECTNAME) + "\"" +
       " -s" + SettingsManager.GetSetting(Constants.VSSARCHIVEFOLDER) +
       " -y" + SettingsManager.GetSetting(Constants.VSSLOGIN) + "," +
       SettingsManager.GetSetting(Constants.VSSPASSWORD) + " " +
       SettingsManager.GetSetting(Constants.VSSARCHIVEFILENAME) + ".ssa" +
       " \"" + SettingsManager.GetSetting(Constants.VSSPROJECTNAME) + "\"";

ProcessFactory.RunProcess("cmd.exe", args, SettingsManager.GetSetting(Constants.VSSINSTFOLDER));

etc., until finished.


Tuesday, June 16, 2009

Using the Process class

I was not too loaded with work recently, so I decided to write a small application that would help automate the process of converting existing Visual SourceSafe projects to Team Foundation Server. The idea is to get some information from the user first, and then spare them some manual tasks - running tools like ssarc, ssrestor or VSSConverter, manually creating and editing XML files, etc.

When the application starts, the user needs to provide (or just check) the following information:

  • A folder where Visual SourceSafe is installed
  • A folder where Visual SourceSafe database is located
  • Visual SourceSafe database administrator login credentials
  • The name of the Visual SourceSafe project to be converted
  • A folder that will be used during conversion to restore VSS database, keep XML files etc.
  • SQL Server that will be used by the converter
  • A name of the TFS and the port number
  • A name of the project on the TFS where the converted files will go

A significant chunk of the application functionality is just wrapping the calls to command line tools so that the user does not have to bother with manually locating them, typing the correct parameters etc.

For that purpose, the .NET class Process is quite handy.
Here is the example:
To archive the VSS project MyProject, which is in the VSS database located on MyServer, into an archive file called MyArchive.ssa, I need to run the following from the command line:

>"C:\Program Files\Microsoft Visual SourceSafe\ssarc.exe" -d- -i -yadmin,password -s\\MyServer MyArchive.ssa "$/MyProject"

To run this command from the C# code I can use the following code:

ProcessStartInfo startInfo = new ProcessStartInfo();
startInfo.FileName = "ssarc.exe";
startInfo.Arguments = @"-d- -i -yadmin,password -s\\MyServer MyArchive.ssa ""$/MyProject""";
startInfo.WorkingDirectory = @"C:\Program Files\Microsoft Visual SourceSafe";
Process process = Process.Start(startInfo);

This is quite self-explanatory.

There are a couple of things that I had trouble with, however. The first is logging. It would be nice to log the errors and messages that the process generates. This is possible, according to the MSDN article.

ProcessStartInfo Class

Standard input is usually the keyboard, and standard output and standard error are usually the monitor screen. However, you can use the RedirectStandardInput, RedirectStandardOutput, and RedirectStandardError properties to cause the process to get input from or return output to a file or other device. If you use the StandardInput, StandardOutput, or StandardError properties on the Process component, you must first set the corresponding value on the ProcessStartInfo property. Otherwise, the system throws an exception when you read or write to the stream.

However, if I redirect standard output to a text file, for example, the user is unable to see it. And some of the tools used require interaction with the user. So it looks like I can either interact with the user or log the messages somewhere, but not both.
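
For the record, the logging alternative would look roughly like this (the ssarc.exe arguments elided):

ProcessStartInfo info = new ProcessStartInfo();
info.FileName = "ssarc.exe";
info.UseShellExecute = false;           // required before redirecting the streams
info.RedirectStandardOutput = true;
info.RedirectStandardError = true;

using (Process p = Process.Start(info))
{
    string output = p.StandardOutput.ReadToEnd();
    string errors = p.StandardError.ReadToEnd();
    p.WaitForExit();
    // write output and errors to a log file here - but the user never sees them
}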

Also, when the process completes, it closes the window it was running in. So, if the process shows a message when it exits, the user does not have time to read it. This is frustrating when the process exits with an error message and the user does not know what exactly the error was. And it cannot be logged either, because the output cannot be redirected - the user needs to see it on the screen.

I will keep looking for an 'elegant' solution, but so far I have found a workaround: rather than starting the process itself, I can start the command line using cmd.exe and pass the whole tool invocation, together with its parameters, as an argument to cmd.exe.

CMD

The trick is that specifying the /k parameter prevents the command window from closing after the process exits. Here is how the previous code looks when changed according to my workaround:

ProcessStartInfo startInfo = new ProcessStartInfo();
startInfo.FileName = "cmd.exe";
startInfo.Arguments = @"/k ""C:\Program Files\Microsoft Visual SourceSafe\ssarc.exe"" -d- -i -yadmin,password -s\\MyServer MyArchive.ssa ""$/MyProject""";
Process process = Process.Start(startInfo);

I will be looking for a better solution when I have time to improve this application.


Tuesday, June 9, 2009

Small Things Refreshed Today

I had to write a small Windows Forms application today. It just gets some user input, creates an XML file, sends it to a web service, gets the response, parses it and shows the results to the user. The good thing is that I had to remind myself how to use two simple things.

1. Saving and retrieving values using the app.config file.

If I want to get some values from the app.config file, I can keep them in the appSettings section, and the whole app.config file for a small application can be as simple as this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="MYKEY1" value="myValue1"/>
    <add key="MYKEY2" value="myValue2"/>
  </appSettings>
</configuration>
To read the values I need to do just the following (after I add a reference to System.configuration to the project):

string myFirstValue = ConfigurationManager.AppSettings.Get("MYKEY1");

To update the values I need to put a little bit more effort

Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
AppSettingsSection appSettings = config.AppSettings;

appSettings.Settings["MYKEY1"].Value = myNewValue1;
appSettings.Settings["MYKEY2"].Value = myNewValue2;

config.Save(ConfigurationSaveMode.Modified);
ConfigurationManager.RefreshSection("appSettings");

It is useful to know that this does not work at debug time, though - it will not throw an exception, but the values will not be updated either. I spent a few minutes trying to find out why it did not work before I understood that this behaviour is expected.

2. Creating the XML document.

Of course, for the purposes of my application, where the whole XML is maybe 10 to 15 elements, I could go with something like the following:

string myXML = "<request>";
myXML += "<ID>" + someID + "</ID>";
...
myXML += "</request>";
return myXML;

The code would actually be shorter than the "proper" XML handling, take less time to write, and maybe even run faster (especially if I used a StringBuilder to concatenate the strings). I did it the "proper" way, however - for practice.

To create a document

XmlDocument xmlDoc = new XmlDocument();

To create a declaration

XmlDeclaration xDec = xmlDoc.CreateXmlDeclaration("1.0", "UTF-8", null);

To create an element in the format of

<MYKEY1>myValue1</MYKEY1>

I created a small helper function

private XmlElement CreateElementFromNameValue(string name, string value)
{
    XmlElement element = xmlDoc.CreateElement(name);
    element.InnerText = value;
    return element;
}

To create an attribute to the element

XmlElement xmlHeader = xmlDoc.CreateElement("header");
XmlAttribute schema = xmlDoc.CreateAttribute("SchemaVersion");
schema.Value = "2.0";
xmlHeader.SetAttributeNode(schema);

To bring it all together

XmlDocument xmlDoc = new XmlDocument();
XmlDeclaration xDec = xmlDoc.CreateXmlDeclaration("1.0", "UTF-8", null);

XmlElement request = xmlDoc.CreateElement("request");
XmlAttribute schema = xmlDoc.CreateAttribute("SchemaVersion");
schema.Value = "2.0";
request.SetAttributeNode(schema);

request.AppendChild(CreateElementFromNameValue("MYKEY1", "myValue1"));
request.AppendChild(CreateElementFromNameValue("MYKEY2", "myValue2"));

xmlDoc.AppendChild(xDec);
xmlDoc.AppendChild(request);

Expected InnerXml of the xmlDoc (formatted for readability):

<?xml version="1.0" encoding="UTF-8"?>
<request SchemaVersion="2.0">
  <MYKEY1>myValue1</MYKEY1>
  <MYKEY2>myValue2</MYKEY2>
</request>

Monday, May 25, 2009

VSS => TFS Migration

Now that I have TFS installed to play with, my next task is to come up with a process to transfer existing projects from Visual SourceSafe. Since the current VSS database is fairly huge, and we do not want to transfer the whole thing at once, I came up with the following process:


  • Select project(s) to be transferred from VSS database

  • Back up project(s) and restore them to the new VSS database

  • Fix the issues in the new VSS database with the Analyze tool

  • Run the VSSConverter tool in analyse mode

  • Get the TFS ready for migration

  • Prepare the migration settings file

  • Run the VSSConverter tool in migration mode

  • Verify the results of the migration

This may look rather lengthy and complex, but it makes sure that the current VSS database remains untouched, which is quite important for obvious reasons.

Here is how I migrated a small project from VSS to TFS, in a bit more detail:

Select project(s) to be transferred from VSS database

Let's say we want to transfer MySmallProject which is located in $/MyMiscProjects/MySmallProject in a large VSS database.

Back up project(s) and restore them to the new VSS database

Microsoft has two utilities for backing up and restoring VSS projects, SSARC and SSRESTOR. Their parameters are described in detail here:

SSARC, SSRESTOR

They usually can be found in the SourceSafe folder (i.e. C:\Program Files\Microsoft Visual SourceSafe)

First, I create a new VSS database (VSSTransfer) where I'm the admin. Next, I need to have admin rights in the initial VSS database and, of course, to know where it is located. Then I can run the SSARC command like this:

ssarc -d- -i -yadmin,password -s\\PathToVSSDB\MyHugeVSSDB CodeProject.ssa "$/MyMiscProjects/MySmallProject"

This backs up MySmallProject with default parameters, without deleting files from the old database "MyHugeVSSDB", into the CodeProject.ssa archive file.

Next, I restore the project into the new empty database I created.

ssrestor "-p$/MySmallProject" -sC:\VSSTransfer -yadmin,password CodeProject.ssa "$/MyMiscProjects/MySmallProject"

Fix the issues in the new VSS database with the Analyze tool

This is just running the Analyze tool with the -F parameter to fix possible issues in the VSS database.
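The call is along these lines (the data folder path is an example - analyze.exe sits in the same SourceSafe folder as SSARC and SSRESTOR):

analyze -f C:\VSSTransfer\data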

Run the VSSConverter tool in analyse mode

The VSSConverter is a Microsoft tool that comes with TFS and allows migrating data from a VSS database into the TFS database. More info here:

VSSConverter Command-Line Utility for Source Control Migration

To run the VSSConverter, a settings file has to be prepared first. Here is a sample (the paths and project names follow the example above):

<?xml version="1.0" encoding="utf-8"?>
<SourceControlConverter>
<ConverterSpecificSetting>
<Source name="VSS">
<VSSDatabase name="C:\VSSTransfer"></VSSDatabase>
</Source>
<ProjectMap>
<Project Source="$/MySmallProject"></Project>
</ProjectMap>
</ConverterSpecificSetting>
<Settings>
<Output file="Analysis.xml"></Output>
</Settings>
</SourceControlConverter>
(if we need to transfer multiple projects, there can be multiple 'Project' elements under 'ProjectMap')

Now I save the file as settings.xml and run the VSSConverter tool (which is located in drive:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE):

VSSConverter analyze settings.xml

(An important note, though - the VSSConverter should be the one that comes with TFS SP1. I tried to use the tool from the original TFS install and ran into trouble with history - it was not migrated at all.)

Two files will be created: Analysis.xml and UserMap.xml.

Get the TFS ready for migration

First of all, create the target project, i.e. MyTFSSmallProject. Then, look at the UserMap.xml. It lists all VSS users who performed actions in the database. It looks something like this (the user names here are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<UserMappings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<UserMap From="OLDUSER1" To=""></UserMap>
<UserMap From="OLDUSER2" To=""></UserMap>
</UserMappings>
To map users properly, we need to add them to TFS. If a user no longer exists, he can be mapped to any other user - the TFS admin or his team leader, for example. So the UserMap.xml will end up looking something like this:

<?xml version="1.0" encoding="utf-8"?>
<UserMappings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<UserMap From="OLDUSER1" To="MYDOMAIN\tfsadmin"></UserMap>
<UserMap From="OLDUSER2" To="MYDOMAIN\teamlead"></UserMap>
</UserMappings>
Prepare the migration settings file

Modify the settings.xml file to specify the SQL Server that is going to be used for the migration process, the Team Foundation Server, the user map file and the destination project on the TFS, and save it as migration_settings.xml. The SQL Server does not have to be the one where the TFS databases are located, but the user performing the migration needs to have CREATE DATABASE permission on it. The result looks something like this (server names are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<SourceControlConverter>
<ConverterSpecificSetting>
<Source name="VSS">
<VSSDatabase name="C:\VSSTransfer"></VSSDatabase>
<UserMap name="C:\VSSTransfer\UserMap.xml"></UserMap>
</Source>
<ProjectMap>
<Project Source="$/MySmallProject" Destination="$/MyTFSSmallProject"></Project>
</ProjectMap>
</ConverterSpecificSetting>
<Settings>
<TeamFoundationServer name="MyTFSServer" port="8080" protocol="http"></TeamFoundationServer>
<SQL Server="MySQLServer"></SQL>
</Settings>
</SourceControlConverter>

Run the VSSConverter tool in migration mode

Run the VSSConverter tool in migration mode as follows:

VSSConverter Migrate migration_settings.xml

A report file called VSSMigrationReport.xml will be created if the migration process runs successfully. A log file called VSSConverter.log will contain information messages about the migration process.

Verify the results of the migration

Log in to TFS, go to the project's Source Control Explorer, check the files, history etc. Get the latest version, build it. Have fun.
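From the command line, something like this should pull the migrated sources down (assuming a workspace is already mapped):

tf get $/MyTFSSmallProject /recursive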

Friday, May 15, 2009

TFS Disaster Resolved

Okay, so today the TFS was finally installed. Unfortunately, I cannot tell for sure what exactly fixed it, because we changed more than one thing. Firstly, the Reporting Services were uninstalled completely from the data tier. Secondly, we found some information on slipstreaming SP1 for TFS 2008 and applied it to the installation package.

Creating a TFS 2008 with SP1 Slipstreamed ISO image

And lastly, we ran the installation from the beginning ... again. I personally think that removing the Reporting Services from the data tier did it. We will need Reporting Services on the data tier later, so we'll see whether installing them back breaks TFS or not. But for now, this weight is off my shoulders.

by . Also posted on my website

Thursday, May 14, 2009

TFS/IIS Disaster Update

I got a response from the Microsoft support person who had told me earlier that he was able to reproduce our error.

According to what he told me, he looked up some internal documentation and found out that the particular configuration we are trying to use (Windows Server 2003 on the application tier, Windows Server 2008 on the data tier) may not have been properly tested. So far, the recommendation is to install the Reporting Services on the application tier. (Later, he said, we will be able to move them to the data tier.)

There were a few issues caused by SharePoint not being completely removed from the application tier before we started the reinstallation of the TFS, but the most interesting one was "Error 29000. The Team Foundation databases could not be installed. For more information, see the Microsoft Windows Installer (MSI) log."

That was a bit tricky, because both the admin and the service account for the TFS had all possible permissions on the data tier. Some log file reading and some searching later, I found out that this is an issue with Analysis Services permissions.

Problem in doing TFS2008 SP1 upgrade

The TFS Service account should be made a member of the server role under Analysis Services -> Properties -> Security (described there as "Server Role is used to grant server-wide security privileges to a user"). There is no need to add the TFS Setup account or the TFS Report account here.

Now, the question was ... what error would come up next?

The next one was "Error 28805. The setup program cannot complete the request to the server that is running SQL Server Reporting Services. Verify that SQL Server Reporting Services is installed and running on the Team Foundation app tier and that you have sufficient permissions to access it. For more information, see the setup log."

OK, that was our mistake. The Reporting Services had been removed from the data tier and installed on the app tier, but the databases were never created. Even SQL Server was not yet installed on the application tier.

With that fixed, we moved forward just to finally hit the wall.

"Error 29112.Team Foundation Report Server Configuration: Either SQL Reporting Services is not properly configured, or the Reporting Services Web site could not be reached. Use the Reporting Services Configuration tool to confirm that SQL Reporting Services is configured properly and that the Reporting Service Web site can be reached, and then run the installation again. For more information, see the Team Foundation Installation Guide."

And what happens here remains a mystery for me so far.

This is what I see in the installation log:

"Setting database connection...

Verifying the configuration of SQL Server Reporting Services...
SQL Server Reporting Services status Name: ReportServerVirtualDirectory Status: Set Severity: 1 Description: A virtual directory is specified for this instance of report server.
SQL Server Reporting Services status Name: ReportManagerVirtualDirectory Status: Set Severity: 1 Description: A virtual directory is specified for this instance of report manager.
SQL Server Reporting Services status Name: WindowsServiceIdentityStatus Status: Set Severity: 1 Description: A Windows service identity is specified for this instance of report server.
SQL Server Reporting Services status Name: WebServiceIdentityStatus Status: Set Severity: 1 Description: A web service identity is specified for this instance of report server.
SQL Server Reporting Services status Name: DatabaseConnection Status: Set Severity: 1 Description: A report server database is specified for this report server.
SQL Server Reporting Services status Name: EmailConfiguration Status: NotSet Severity: 2 Description: E-mail delivery settings are not specified for the report server. E-mail delivery is disabled until these settings are specified.
SQL Server Reporting Services status Name: ReportManagerIdentityStatus Status: Set Severity: 1 Description: A report manager identity must be specified.
SQL Server Reporting Services status Name: SharePointIntegratedStatus Status: NotSet Severity: 2 Description: The report server instance supports SharePoint integration, but it currently runs in native mode. If you want to integrate this report server with a SharePoint product or technology, open the Database Setup page and create or select report server database that can be used with a SharePoint Web application.
SQL Server Reporting Services status Name: IsInitialized Status: OutOfSync Severity: 3 Description: The report server is not initialized.

Verifying SQL Server Reporting Services configuration status failed.

Error: ErrorCheckingReportServerStatus.

Configuring SQL Server Reporting Services failed."

But why? I have no idea. Here is what happens:

I open the Reporting Services Configuration Manager, go to "Database Setup" and notice that the "Server Name" is pointing to the data tier, and the "Initialization" shows a grayed cross against it. So I point it to the correct server, press "Connect", "OK", "Apply" and enjoy a lot of green ticks in the "Task Status" and the grayed cross changing to a grayed tick.

Voila! Reporting Services are configured properly and initialized. I can go to http://localhost/reports and see the proper "SQL Reporting Services Home Page".

So I switch back to my error and press "Retry". The installer thinks for a while, but then the error is displayed again.

So I start the Reporting Services Configuration Manager again, go to "Database Setup" and notice that the "Server Name" is pointing to the data tier, and the "Initialization" shows a grayed cross against it!

Why does the installer do this to me? I have no idea so far and could not find any good information.

by . Also posted on my website

Small Thing Learned Today

(No, I didn't forget my blog. It's just that not much exciting has been happening to me in regards to development.)

By default, Visual Studio 2008 does not show the solution node in Solution Explorer when there is only one project in the solution. I personally find this behaviour very annoying. Today I was going through an example from a book which involved creating a solution and adding a couple of different projects to it. So I created the solution, added a new project to it, did whatever was required and came to the step where I had to add a second project to the solution. And here I am, completely puzzled - not only do I not see the solution in the Solution Explorer, but there is no menu which would 'intuitively' point me to a way to add a second project.

The fix was easy to find, but still - what's the point of having 'solutions' if you're hiding them from the users by default?

Visual Studio 2008 does not show Solution view when there is only one project in the Solution (by default)

The solution is to go to Tools > Options > Projects and Solutions and check "Always Show Solution" (which is unchecked by default).

by . Also posted on my website

Thursday, April 23, 2009

IIS Disaster Update

Microsoft has been able to reproduce our issue on their test machines. I guess that puts the ball in their court now. It makes me feel a little less dumb - I was quite sure that we were missing some important security configuration setting or something like that. For the company, I guess, it also means that we do not have to pay for the support hours Microsoft spent on the issue. Let's see what they come up with ...
by . Also posted on my website

Tuesday, April 21, 2009

IIS Disaster Update

I got a response from Microsoft, which is actually more of an information request. They wanted to know if I can connect to the IIS on the data tier using the 'Connect As' checkbox on the 'Connect to Computer' dialog, like this:

Apparently, I cannot. This did not come as a surprise. However, I decided to do an experiment and use the service account credentials in the 'Connect As' dialog box. Strangely enough, that worked. Very strange - both accounts are administrators on both machines, but only one of them can connect to IIS on the data tier remotely. I started looking for a possible reason and noticed that the service account was a member of the IIS_WPG group on the app tier, and the TFS admin account was not. Aha! So, I added the admin account to the group.

Now, a really strange thing happens. I log on to the app tier as the TFS admin account, start IIS Manager, right-click 'Internet Information Services' and click 'Connect'. From here, I try 2 different approaches:

1. Connect without providing credentials. Which, I assume, is connecting as the current user - the TFS admin user.

and this is what I get for my efforts.

2. Connect specifying the credentials explicitly. Which are, of course, the credentials of the TFS admin user.

and voila

Suddenly I have all the access I need. Unfortunately, that does not help much because the TFS installation still fails - I assume it tries to log in to the data tier using the first approach.

Which obviously means ... which means ... ugh, I have no idea what that means. I do not have enough knowledge on the subject. Somehow the remote (data tier) IIS treats these logins differently, even though it is the same domain account trying to log in. Something must be configured differently somewhere. I tried to play with the authentication settings on both servers, but have not succeeded yet. I forwarded my new findings to Microsoft support. Stay tuned ...

by . Also posted on my website