Archive for September, 2003

Rotating text

I’ve enjoyed reading Raymond Chen’s series of articles about different historical aspects of Microsoft Windows. Today, he writes about why you can’t rotate text.

Curves

I’ve been following a loosely coupled weblog discussion about using curves when grading, doing performance reviews, etc.

It’s being discussed by Paul Vick here. Chris Anderson talks about it here and here. It all started from John Porcaro’s entry here.

Here are my thoughts on this.

I think people are discussing different issues without realizing it. One issue is the idea of ranking people and using a curve to do the ranking. The other issue is the automatic application of the ranking to consequences, such as failing a class or being fired.

Clearly, John Porcaro is arguing against the automatic application of the ranking. I agree that automatically firing people simply because they rank the lowest is a bad policy. In my opinion, the key problem with this is that you have applied the curve to an unrepresentative sample.

Let me use a hypothetical example of what I mean by an unrepresentative sample. A sales manager has a sales force of 100 people. A performance curve is applied using various parameters. Using the curve, we assign the label “Good” to the top 10%, the label “Acceptable” to the middle 80%, and the label “Poor” to the bottom 10%.

To automatically fire the bottom 10% and give bonuses to the top 10% is foolish, in my opinion. The sample only ranks the salespeople within the company. If you were to rank both your salespeople and all potential replacements, then I think you would have a sample that is fully representative. If some of your current salespeople fall in the bottom 10%, you fire them. Replacement salespeople who fall in the top 90% are hired. If all the replacement candidates fall in the bottom 10%, you don’t fire anybody.

Practically, it is very difficult to rank the replacement candidates along with the existing salespeople. It’s hard to compare apples to apples in this scenario. The ranking process is subjective. This subjectivity is why automatic application of rankings is bad.

I also agree with the argument that grade inflation is a problem. It is a problem that exists in our schools and even exists in some companies. I could argue that the problem of ranking inflation in a company solves itself when the company goes out of business, but I’ll leave that argument for another time.

Grade inflation is a problem, but automatic application of a curve is not the solution. The main problem with an automatic curve is that it creates a variable standard for passing. Take two classes learning the same material and give them the same tests. One class is an honors class. The bottom student in that class demonstrated a knowledge of 85% of the material. That student fails. The other class is made up of regular students; the top student in that class demonstrated a knowledge of 85% of the material. That student passes, as do a bunch of students who understood less than 85% of the material. The “bar” in each class was set at a different level.

So what do we do? We set a standard of measuring achievement, and we stick to it. If it means that a class has 90% of the students getting an A, that is fine. If it means that 90% of students in a class fail, that is fine, too. Where we fail is in adjusting the bar down when it is not warranted. Another place we fail is not raising the bar when it is warranted.

Grade inflation is not a failure of those being ranked/measured. It is a failure of those doing the ranking. Applying curves is a popular solution because it is easy to do and it removes responsibility from those who have to do the ranking. You don’t like that you were fired? It’s the policy of the curve.

I could probably write a couple chapters in a book on this topic. I’ll stop here…

VSS and SQL Server: Reviews

I’ve been busy the past two weeks really digging into demos of products that will help us integrate our SQL development with SourceSafe. I’ve also been looking at bug tracking products, since our current bug tracking is done through Exchange Server using a set of public folders, attached Word documents within the public folders, and categories to control our priorities. I will write more on the bug tracking in a future post.

When I started the review process, I was concerned mainly with these things:

  • Recording changes in SourceSafe
  • Automated lookup of column names
  • Updating other databases with the changes

I spent a lot of time reviewing our current process for database updates. I thought about how the database update process could be automated. Table changes make the update process really difficult. In the end, I put the idea of automating the database updates on the back burner. However, the issue still affected my decisions in the review process.

Recording changes in SourceSafe is pretty straightforward. Lots of companies have a manual process for editing stored procedures. The procedure is generally like this:

  1. Check out the stored procedure to a file
  2. Edit the checked out file (Query Analyzer is a popular tool for this)
  3. Apply the changes to the database
  4. Save the changes to the file
  5. Check in the file
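For concreteness, here is a minimal sketch (with hypothetical object names) of what such a checked-out stored procedure file might look like; the drop-and-recreate guard at the top is what makes step 3 safely repeatable:

    -- Hypothetical file checked out from SourceSafe: usp_GetOpenOrders.sql
    IF EXISTS (SELECT * FROM sysobjects
               WHERE name = 'usp_GetOpenOrders' AND type = 'P')
        DROP PROCEDURE dbo.usp_GetOpenOrders
    GO

    CREATE PROCEDURE dbo.usp_GetOpenOrders
        @CustomerID int
    AS
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID
          AND Status = 'OPEN'
    GO

    -- Dropping the procedure removes its permissions, so re-grant them.
    GRANT EXECUTE ON dbo.usp_GetOpenOrders TO public
    GO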

I like the idea of an integrated tool. Visual Basic has simple integration with SourceSafe that makes things very easy. Why can’t we have the same thing with SQL development? I looked at two products in depth, SQL Source Control 2003 and mssqlXpress. These two products were the only ones I could find in a price range that is reasonable for small development teams. Both products are relatively immature, as they have been available for less than a year.

SQL Source Control Review

SQL Source Control is developed in Poland. It has two features that I would really like – custom documentation of objects and automated deployment (updates).

I really like the idea of documenting tables and columns. We have that kind of documentation, but it is not integrated into the development environment. SQL Server has metadata, but it is limited to 255 characters. A lot of our documentation, which is stored in Word documents, is usage documentation that describes the relationship of a column with other columns in the same table and in other tables. This often takes more than 255 characters.
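For reference, here is a rough sketch of what that built-in metadata looks like in SQL Server 2000, assuming it means extended properties (table and column names here are made up):

    -- Attach a description to a column using SQL Server 2000 extended properties.
    -- Object names below are hypothetical.
    EXEC sp_addextendedproperty
        @name = N'MS_Description',
        @value = N'Next sales order number; incremented by the order entry screen.',
        @level0type = N'USER',   @level0name = N'dbo',
        @level1type = N'TABLE',  @level1name = N'SystemNumbers',
        @level2type = N'COLUMN', @level2name = N'NextSalesOrder'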

SQL Source Control allows documentation of everything, and it is easily accessed within the development environment. This feature had me leaning toward SQL Source Control before I dove into the depths of the product. However, other problems in the product soon overrode the benefit of this feature.

The biggest problem for me is that Intellisense only works when you qualify every column in your query with the full table name. Aliases do not bring up Intellisense. I never use full table names in my queries. In fact, we’ve developed a standard set of aliases that we use with our tables. If we used this product, we wouldn’t have Intellisense, which is a major feature.
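To make the difference concrete, here is the style in question (hypothetical tables); the product’s Intellisense reportedly only fires on the first form:

    -- Fully qualified column names: Intellisense works in SQL Source Control.
    SELECT Orders.OrderID, Orders.OrderDate
    FROM Orders
    JOIN Customers ON Customers.CustomerID = Orders.CustomerID

    -- Our house style uses aliases: no Intellisense.
    SELECT o.OrderID, o.OrderDate
    FROM Orders o
    JOIN Customers c ON c.CustomerID = o.CustomerID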

The second problem is a perceived one: performance. The demo only allows you to work on the first five objects. Even with only 5 views, 5 stored procedures, 5 tables, and so on, the application was slow to go through a “synchronization” process. I shudder to imagine how long that process would take with our 1300 stored procedures, 400 tables, and 1700 views. This may not really be a problem, but I wouldn’t be able to find out without spending money.

The automated deployment idea sounded great, but I didn’t try it in depth because of the Intellisense issue.

The documentation for SQL Source Control is pretty good.

mssqlXpress Review

mssqlXpress is developed in Australia. It doesn’t have the documentation feature. It doesn’t have the automated deployment feature either, although their product comparison page promises it “Soon!”

mssqlXpress’s Intellisense, which they call “Code Complete”, works on table aliases. The layout of the user interface is a little funky at first, but I found that I quickly got used to it.

The F5 key serves two purposes, which I think is a no-no in UI design. If the focus is in the database object tree, it refreshes the status of the tree. If the focus is in the code window, it executes the SQL code (just like Query Analyzer). I got bitten by this when I wanted to refresh the object tree but hit F5 with the focus in the code window. A change was made that I wasn’t ready to commit to the database. I was easily able to roll it back, but I wouldn’t have had this problem if the F5 key only served one purpose.

mssqlXpress keeps track of table changes by keeping both the create script of the table and a change script in SourceSafe. The change script can be checked out and edited. Each change is stored as a separate file name. This design makes a lot of sense to me. I can envision automating table updates and data conversions by figuring out which version of the table you currently have, then sequentially executing each change script until you get to the desired version.
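As a rough sketch of how that sequencing could work (all names are hypothetical, and this is not how mssqlXpress itself does it), a driver script could read the table’s current version and apply each change script in turn:

    -- Hypothetical driver: bring dbo.Orders up to the latest version by
    -- running each change script whose number is above the current version.
    DECLARE @ver int
    SELECT @ver = Version FROM dbo.TableVersions WHERE TableName = 'Orders'

    IF @ver < 2
    BEGIN
        -- body of Orders.Change002.sql would go here
        ALTER TABLE dbo.Orders ADD ShipMethod varchar(20) NULL
        UPDATE dbo.TableVersions SET Version = 2 WHERE TableName = 'Orders'
    END

    IF @ver < 3
    BEGIN
        -- body of Orders.Change003.sql
        ALTER TABLE dbo.Orders ADD Freight money NULL
        UPDATE dbo.TableVersions SET Version = 3 WHERE TableName = 'Orders'
    END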

mssqlXpress also maintains a public forum for their products. I found this forum invaluable in learning the product. Unfortunately, I was driven to the forum because the HTML documentation for the product is pretty poor. A number of features and options in the product weren’t documented at all.

I can sympathize with the documentation problems. At work, we have a great documentation team, but our product is changing so rapidly that the documentation team can’t keep up with the changes. I’m giving the company the benefit of the doubt and presuming they are having the same problems.

In my opinion, this documentation problem is a symptom of a positive aspect about the product. The programmers are actively working on the product and rapidly creating new functionality (as well as new bugs!). In the two weeks that I was working on the product, I thought that an “auto indent” feature would be very helpful. I also ran across a problem that was due to our development SQL server using a binary sort order. I reported this problem to the developers.

This morning, I received a beta announcement. This beta fixes the problem with a binary sort order. It also adds an auto indent feature. I haven’t tried this new version yet, but the rapid response is a sign of a product and company with which I’d want to develop a relationship.

Conclusion

Given all the factors I’ve described here, I recommended to our management team that we invest in mssqlXpress. It will achieve our primary objectives, and seems to be on a fast track to being a great product. All comments are welcome.

RIAA Redux

Other people are also expressing their opinions about music file sharing and the RIAA. This entry makes sense to me.

RIAA thoughts

I ran across this link from Robert Scoble. I printed out a copy of this editorial cartoon and it is making the rounds at work.

I must admit that I’ve ignored the whole music-sharing issue. I don’t share my music online. I download MP3s, but I only download files that are publicly available on web sites. My interest is in wind band music, so I have gathered a collection from the military groups, universities, colleges, and some high schools. Some publishers even make MP3s of their music available. I’ve never thought that I had to worry about this issue. I think I’m going to start worrying.

Historically, changes in the interpretation of law happen incrementally. For years, laws existed that made abortion mostly illegal. In January of this year, we recognized the 30th anniversary of the Roe v. Wade decision, which removed most of the restrictions that had existed. I think it’s important to realize, especially for those who weren’t adults in 1973, that this decision didn’t just come out of the blue.

Opponents of the abortion restrictions tried to get the restrictions repealed starting in the 1960s. At first, the courts ruled against removing the restrictions. But certain decisions, not directly related to the abortion restrictions, set precedents that were eventually used in the Roe v. Wade decision. The Supreme Court ruling in Griswold v. Connecticut (1965) established a right to privacy in family matters. Seven years later, Eisenstadt v. Baird (1972) extended that privacy to individuals. These cases were used as the basis for the Roe v. Wade (1973) decision. It took 8 years, but the change occurred.

We are seeing the same incremental changes happening with respect to copyright laws and music. Combined with the changes due to the Patriot Act, these changes are eroding our privacy rather than establishing it. We could reach a point where we can’t legally enjoy music except by listening on our headphones, and then only if the volume is set to its lowest level.

I worry that the recording industry is going to ruin the music industry, which encompasses a lot more than CDs. When musical groups visit nursing homes on a Sunday afternoon, will we have to charge admission in order to pay royalties to the copyright holders? If I play “One Hand, One Heart” at a wedding, does somebody have to mail a check to the RIAA? That’s the direction things seem to be heading.

VSS And SQL Server: Part 2

Nik Shenoy makes the following comment regarding my previous post about Visual SourceSafe And SQL Server:

My thought here is that there is a significant difference between a “release” and a specific file version. A release has a little more management involved and is a little more concerned with file and functional dependencies.

I agree 100% with this statement. File versions of stored procedures, views, and functions are pretty much interchangeable. However, data has historical dependencies. I can’t update data in a child table if the parent table hasn’t been created yet. A table transformation depends on all previous transformations of the same table to have been completed.

Nik’s comment describes how he is thinking about implementing the table updates. Here is an idea that I have borrowed from a UNIX-based product I used to support. I haven’t thought about this in depth, so I don’t have details worked out.

Create a table in the database that keeps track of data conversion history. The table contains a primary key column (a unique name for the conversion).

Have a master data conversion script that will contain all historical data conversions. Each conversion will have a unique name. The script will check if the conversion exists in the conversion history table. If it exists, the script does not execute that conversion. If it does not exist, the conversion is executed and a record is added to the conversion history table.
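A minimal sketch of that idea in T-SQL (names are hypothetical): the history table is created once, and the master script wraps each conversion in a guard like this:

    -- One-time setup: the conversion history table.
    CREATE TABLE dbo.ConversionHistory (
        ConversionName varchar(100) NOT NULL PRIMARY KEY,
        AppliedOn datetime NOT NULL DEFAULT (GETDATE())
    )
    GO

    -- Pattern repeated in the master script for each conversion.
    IF NOT EXISTS (SELECT * FROM dbo.ConversionHistory
                   WHERE ConversionName = 'Conv_0042_SplitCustomerName')
    BEGIN
        -- ...the actual data conversion statements go here...
        INSERT dbo.ConversionHistory (ConversionName)
        VALUES ('Conv_0042_SplitCustomerName')
    END
    GO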

An alternative idea I have been toying with is to use the build version that we store in the database and forget the data conversion history table. A master data conversion script is still used. The master script retrieves the old build version. Each data conversion knows which build version it was created against, so it can make sure the old build version of the database was at or below the build version for the conversion.

Because we use exclusive checkouts, a master script may not be workable if we have two developers wanting to write conversions at the same time. Whatever mechanism we use to perform updates, we’ll need to be able to track which conversions need to be done and make sure they are done in the proper order.

We also have a problem with maintaining and updating “system” data. For example, we have a codes table that is used to populate drop-down combo boxes in our UI. We’ll need a mechanism to update these system data tables. Some tables contain a mix of developer-maintained and user-maintained data. For example, we have a table that stores system numbers (next sales order, next purchase order, next invoice, etc.). The NextNumber column needs to be preserved during an update, while the rest of the data needs to be refreshed during an update.
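As a hedged sketch of one way to handle that mixed case (table and column names are hypothetical): refresh the developer-maintained columns from a shipped seed table, add any new rows, and never touch the live NextNumber values on existing rows.

    -- Refresh dbo.SystemNumbers from a shipped seed table.
    -- Descriptive columns are overwritten; live NextNumber values are preserved.
    UPDATE sn
    SET sn.Description  = seed.Description,
        sn.NumberFormat = seed.NumberFormat
    FROM dbo.SystemNumbers sn
    JOIN dbo.SystemNumbers_Seed seed ON seed.NumberType = sn.NumberType

    -- Add brand-new number types, starting them at the seed's default value.
    INSERT dbo.SystemNumbers (NumberType, Description, NumberFormat, NextNumber)
    SELECT seed.NumberType, seed.Description, seed.NumberFormat, seed.NextNumber
    FROM dbo.SystemNumbers_Seed seed
    WHERE NOT EXISTS (SELECT * FROM dbo.SystemNumbers sn
                      WHERE sn.NumberType = seed.NumberType)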

These are not unique problems. Every database development team faces these issues. There don’t seem to be well-documented solutions to these problems, however.

Can You Raed Tihs?

Roy Osherove posts about an interesting research finding on how we read. Lots of people have linked to his blog entry.

What I would like is to have the syntax checkers be able to comprehend these “misspellings” as easily as our brain does. That will eliminate 90% of my syntax errors, which are usually from typing “tlb” when I should have typed “tbl”.

I wonder if this fact, when more widely understood and accepted, will cut down on the editing time of books? After all, if we can comprehend the material even if it is misspelled, why spend the money to change it?

Visual SourceSafe and SQL Server

At work, I’ve been investigating what will be involved to keep track of the database changes made by our programming team. So far, we’ve been able to work with a common shared database and mostly avoid stepping on each other’s toes. However, there are many shortcomings with not having the database managed with a source versioning system. The shortcomings are starting to cause pain to our managers, so we’ve been asked to do something about it.

From day one, we’ve kept our Visual Basic code in Visual SourceSafe. For a long time, I’ve wanted to work out a plan for doing the same with our database objects, but I haven’t had the authorization to spend the necessary time until now. That hasn’t kept me from gathering bits and pieces of information about this issue. However, our needs are more than just storing the database objects in a SourceSafe database.

Some aspects of the process are very simple. We can easily version the views, stored procedures, user functions, and other basic objects. When you want to deploy these objects, you replace the existing copy with the newer version.

But what about table changes and system data? This is not a big deal when generating new databases, but we need to deploy data conversions to other developers, to QA databases, and to customer databases. Data conversions are not a simple “replace” operation. They have dependencies. If a table has been changed twice since the version you have, the data conversions will likely need to be run in a particular order. In addition, you don’t want to accidentally run a data conversion a second time, as it may corrupt the data if you haven’t built safeguards into the process.

In my mind, deployment needs to occur from what’s in SourceSafe. It would be nice to have a tool that supports deployment as well as the simpler job of keeping database objects in SourceSafe.

Finally, we’re doing our editing in Query Analyzer. I’d like to have a process that involves an editor that’s geared toward SQL, but has some of the features that are considered standard in a programmer’s editor (auto-indent, split windows, bookmarks, etc.).

Unfortunately, I’ve found that lots of small shops have this issue, but there aren’t many commercial solutions that fill our need. There are large enterprise solutions that might help, but a small development shop can’t afford a tool that would cost $5,000 per developer. We could roll our own, but our team of seven developers is busy enough trying to churn out the new features and squash the bugs (some of which are caused by our informal process). I think we’re going to have to settle for a mix of products that help with some parts of the process and home-grown utilities to help with the other parts.

Some other people on the web are also grappling with some of these issues. Here are some of the links I have found:

Personality types

Chris Anderson writes about taking a personality test.

I took the same test and came out with the same personality type (out of 16 types) as Chris did. I wonder if this might be a common type among programmers who have blogs?

SQL bug fix

I got a chuckle out of this fix that is available for SQL Server 2000. If you have a query with 32000 or more OR clauses in it, don’t you have more problems than a bug in SQL Server 2000?
