# Tuesday, April 11, 2006

Do you find yourself overwhelmed by the amount of email flowing into your inbox?  Here are some tips I've used to keep my inbox nearly empty over the years.  Most of these tips extend to your real-world and voicemail inboxes as well and will (I think) help you remain more sane and organized (and polite).

1 - Do not subscribe to email lists that you are not actively interested in.  This seems obvious to some people, but if you find yourself just deleting every message from a particular email list, newsletter, or other source of emails, just unsubscribe.  Maybe you were really keen on the subject at the time you subscribed; maybe you thought it'd be neat to stay on top of X.  But if you find that's just not the case--that it's not important enough to you to keep up with--just cut it out; you can almost always subscribe again later if you get interested.

2 - Think of your inbox like a to-do list; it already is one in a certain sense, but make it more formal in your head.  Anything in your inbox needs attention, and it needs it as soon as you can get to it.  Thinking this way helps motivate you not to let things pile up, and it leads naturally into the next few tips.

3 - Try to take action on every email as soon as you read it.  If it requires a response, try to respond right away.  If you need to think on it, it's okay to leave it there for the next pass.  If you think it will be a while before you can respond as fully as you need to, and the missive is personal (from a real person to one or a few people), respond right away saying you're thinking about it and give a timeframe within which you intend to respond.  This is just polite and will probably save you from getting more emails from that person asking for a status.  If it is something from a list or newsletter that you are interested in, leave it there for the next pass.

4 - I mentioned the next pass in the previous tip.  This is simply a way of further weeding out your inbox, especially for non-personal emails.  If you truly didn't have time to properly take action on the first pass, the next time you sit down to look at your email, give everything a second look.  This typically takes far less time than the first pass and lets you quickly determine whether you can take action on the second-pass items.  By the second pass, you should have taken action on 80% or more of the emails from the first pass.  Yes, I'm making the percentage up, but the point is that if most emails in your inbox survive the second pass, you're probably not devoting sufficient time to it.  .NET developers can liken this process to .NET garbage collection: if emails survive the first pass, they're promoted to gen1, and so forth.  But the higher the generation, the fewer remaining emails there should be.

5 - Aggressively delete.  Be willing to decide that you're just not going to get to something and either file it away or, preferably, delete it.  This only applies to non-personal emails that don't require a response (e.g., the newsletter/email list variety).  You may think that you'll get time some day to look at it, but I assure you, if it reaches the third pass and is still not important enough to make time for, you probably never will make time for it.  In my opinion, the only things that should survive the third pass are items that truly require action on your part but that may require more time than the average email response.  For instance, if you get a bill reminder, you can't just choose to delete and ignore it, but you may not have time until, say, the weekend to get to it.  It's fine to just let these lie, but leave them in the inbox so that you don't forget.  You should have very, very few emails that survive the third pass.  If you have lots, you're not giving your email enough time.

6 - I should mention that in the last three tips, there is implied prioritization.  In my book, emails from one person directly to you should always take precedence, even if you're not particularly keen on them (e.g., someone asking you for help, which happens for folks like me who publish helpful stuff).  I consider it rude to ignore personal emails, even from recruiters, so I always make an effort to respond, if for nothing else than to say I'm sorry that I don't have time.  To me, this is just common-sense politeness, and I hate to say it, but it really irks me when folks don't extend the same courtesy to me.  The good news is that if you follow my tips, you can continue to be a polite person, at least in that regard, because your inbox never gets so full that you don't have time at least for the personal emails.  (And by "personal" I don't mean non-business; I mean from a real person to a real person, so business-related missives are not excluded from this rule.)

7 - Check your email proportionately to how often you get personal email.  It's okay to let newsletters and lists pile up because you can safely delete huge swaths of those if they do, but it is not okay (IMO) to let personal emails pile up.  If that's happening, you need to check email more often and/or make more time for it.  Maybe it's not your favorite thing, but it is just part of life.  If you're important enough, get someone to screen your emails for you.

If you follow these guidelines and still find your inbox piling up, you're either really, really important and famous, or you're just not being honest with yourself about what you do and don't have time for.  If nothing else, find a way to stay on top of the personal email.  Even if you don't like my approach to everything else, it is just the polite thing to do.

Tuesday, April 11, 2006 9:45:11 PM (Eastern Daylight Time, UTC-04:00)

At the recent Orlando CodeCamp, I did a presentation called (roughly) "Introduction to Generic Database Programming with ADO.NET 2."  Phwew.. gotta take a deep breath after that one.  The premise of the talk was to introduce folks to the features in ADO.NET 2.0 that make provider-agnostic (a.k.a. generic) database programming possible.  I think it accomplished that: it covered the System.Data.Common namespace and included an example illustrating how to create connections using the DbProviderFactories class and the provider invariant name, as well as how to retrieve metadata using the GetSchema methods on DbConnection.
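
To give a flavor of what that looks like, here's a minimal sketch (the invariant name and connection string are placeholders you'd swap for your own):

using System.Data;
using System.Data.Common;

static DataTable GetDataSourceInfo(string invariantName, string connectionString)
{
    // Look up the factory registered under the provider invariant name,
    // e.g. "System.Data.SqlClient" or "System.Data.OracleClient".
    DbProviderFactory factory = DbProviderFactories.GetFactory(invariantName);

    using (DbConnection connection = factory.CreateConnection())
    {
        // Only the connection string itself remains provider-specific.
        connection.ConnectionString = connectionString;
        connection.Open();

        // Retrieve one of the standard metadata collections.
        return connection.GetSchema("DataSourceInformation");
    }
}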

In my chapter for Professional ADO.NET 2, I created a very basic custom ADO.NET provider for Active Directory, which required me to implement most of those classes and provide the basic info from GetSchema.  So I was familiar with it from the theory side of things and thought it was pretty neat as an idea.  But I haven't had the opportunity to really dig into it much myself; I prefer using an object-relational mapper when I can, and I've been fortunate enough not to be in many situations where doing "real" generic db programming was necessary.  I also knew, in theory, that there were complications in creating database-agnostic code, but I didn't know just how unfinished it is.

Consider, for example, that Oracle returns results from stored procedures using an output cursor parameter instead of a regular result set; this can make things less than generic when you want to use sprocs.  And even something as simple as dynamic SQL command parameters is not fully addressed by the new ADO.NET 2 features.  It is this fact that prompted this blog entry.  I was trying to do something super simple--create a very focused test data generation tool that could use Oracle or SQL Server as its persistent data store.

In particular, for the commands (trying to follow best practices), I was going to use ADO.NET parameters instead of string concatenation or substitution, and I recalled that one of the new standard schemas is a DataSourceInformation schema that specifies, among other things, a format string for parameter names.  In other words, as my friend Plip demonstrates on his blog, I should be able to use that value with String.Format to get a properly formatted parameter name for my commands.

It turns out, though, that it just doesn't work, at least not in the RTM version.  (In Plip's defense, that post was made when Beta 1 was current.)  In the RTM, the SQL Server provider returns simply "{0}" as the parameter marker format string, which is, I'm sad to say, utterly useless.  It should (in my humble opinion) return "@{0}" to prepend the @ to the name, since that's what SQL Server expects.  The Oracle provider, by contrast, correctly prepends the colon, using ":{0}" as its format string.  So something that should be as simple as getting an appropriate parameter name is goofed up.
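
For the curious, the intended usage looks roughly like the sketch below, assuming an open DbConnection named connection; the special-casing at the end is my own workaround for the RTM behavior, not anything official:

// Grab the DataSourceInformation schema row for the open connection.
DataRow info = connection.GetSchema(
    DbMetaDataCollectionNames.DataSourceInformation).Rows[0];
string markerFormat = (string)info[DbMetaDataColumnNames.ParameterMarkerFormat];

// RTM SqlClient hands back "{0}" instead of "@{0}", so String.Format alone
// yields a bare name; special-case it when you know you're on SQL Server.
if (markerFormat == "{0}")
    markerFormat = "@{0}";

string parameterName = String.Format(markerFormat, "CustomerId");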

Now maybe I'm missing something, but I know that folks like Frans Bouma, who have far more experience than I writing code to interface with multiple providers, seem to indicate that my impressions are not far off.  My own object-relational mapper completely abstracts the SQL itself such that providers have to supply SQL templates, so I'm not faced with writing generic SQL and can still take advantage of the abstractions provided by ADO.NET.

So I guess it really depends on what you're looking for when you say generic database programming.  For now, it seems only partially fulfilled, particularly since the queries themselves don't have sufficient standardized abstractions (that I'm aware of, anyway).  I know there is a SQL standard, but as far as I can tell, it is fairly limited and doesn't cover simple things like (as Frans pointed out) getting a db-generated identity value back.
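
To make the identity example concrete, here's the kind of divergence I mean (the table and column names are made up):

// SQL Server: batch a SCOPE_IDENTITY() select onto the end of the insert.
string sqlServerInsert =
    "INSERT INTO Customers (Name) VALUES (@Name); SELECT SCOPE_IDENTITY();";

// Oracle: no multi-statement batches like that; instead, use a RETURNING
// clause that feeds an output parameter.
string oracleInsert =
    "INSERT INTO Customers (Name) VALUES (:Name) " +
    "RETURNING CustomerId INTO :CustomerId";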

The good news is that there are good abstractions that can take care of a very large portion of the provider-specific work for us.  Object-relational mappers are one such example, and with the advent of LINQ and DLINQ, I hope that we'll find less and less need to actually write SQL ourselves and rely, rather, on these higher-level, object-oriented abstractions to hide the complexities of data persistence for multiple providers.  Generic database programming isn't a myth; it's just a legend, something based in truth but not a full reality.  What do you think?

Tuesday, April 11, 2006 8:26:01 PM (Eastern Daylight Time, UTC-04:00)

I just spent a few hours diagnosing what appears to be a bug in the new ListBindingHelper class used by, among other things, the Windows Forms DataGridView control.  If you call one of the GetListItemProperties methods, as the DGV does when automagically determining the columns based on your data source, it may ultimately call a helper method called GetListItemPropertiesByEnumerable.  This is a last-ditch attempt to determine the properties based on the type of objects in your data source.

Well, actually, it is the second choice; it will first look to see if your list class implements ITypedList and, if so, let it take over the work of getting the properties.  But if your list class is one of the handy ones in the framework (an array, ArrayList, List<T>, etc.), it will fall back on the GetListItemPropertiesByEnumerable method, assuming your list class implements IEnumerable, which most do.  (As an aside, it will then just get the properties on your list or, to be more precise, on whatever the heck you passed into the method.)

So the bug (or so I call it) is in that method.  I presume in an effort to be all high-performing and such, the developer wrote in a killer short-cut:

// Paraphrased: if the data source is an array, shortcut straight to the
// element *type*--without ever looking at the instances in the list.
Type yourEnumerableListType = yourEnumerableList.GetType();
if (typeof(Array).IsAssignableFrom(yourEnumerableListType))
  return TypeDescriptor.GetProperties(yourEnumerableListType.GetElementType(),
           ListBindingHelper.BrowsableAttributeList);

Note that I took the liberty of paraphrasing here, but if you know how ICustomTypeDescriptor works, you'll readily see the problem: it works on instances, not types.  This means that your implementation of ICustomTypeDescriptor will never be called if you are binding an array, even if there are valid instances in the collection upon which it could be called.  I see this as a major bug, and I'm quite surprised it made it through QA.

You see, if your enumerable list class is not just a plain-Jane array, it will fall through that short-cut down to what it really should be doing, which is getting the first item from the enumerable list and calling TypeDescriptor.GetProperties (the object-based one), which will call your implementation of ICustomTypeDescriptor and allow you to substitute a property list of your choosing.

Now, there are certainly plenty of workarounds for this, and in fact, I wouldn't have run across it if it weren't for the fact that someone else wrote this method to return an array instead of, e.g., a generic List.  But that's not the point; the point is that it shouldn't be doing this.  If there are objects in the enumerable list, it should interrogate them, regardless of whether or not it is an array.

In any case, whether or not you agree with my classification of this as a glaring bug, I hope that maybe someone will benefit from the time it took to research this gotcha (because it most definitely is that).  The short answer, though not ideal, is to just not bind to arrays and use an ArrayList or generic list or collection or whatever instead.  And, by the way, if you're wondering how I figured all this out, there's a nifty little tool called .NET Reflector that provides many insights.  It's not as easy as debugging (because you can't actually step into the code), but if you're good at reading and following control flow, it can help you gain a much deeper insight into OPC (other people's code) when the docs just aren't sufficient.
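
If you want to see the difference in action, here's a sketch (Widget is a hypothetical class that implements ICustomTypeDescriptor):

Widget[] widgets = GetWidgets(); // hypothetical source that returns an array

// Bound as an array, the short-cut above kicks in, and ICustomTypeDescriptor
// on the Widget instances is never consulted.
dataGridView.DataSource = widgets;

// Bound as a List<Widget>, the fallback path grabs the first item and calls
// the object-based TypeDescriptor.GetProperties, so your implementation of
// ICustomTypeDescriptor gets its say.
dataGridView.DataSource = new List<Widget>(widgets);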

Tuesday, April 11, 2006 7:00:02 PM (Eastern Daylight Time, UTC-04:00)
# Sunday, April 9, 2006

Because of my pending interstate move, I spent some time yesterday going through my old electronic stuff.  I knew I had a few things I wanted to get rid of, so I got it all together.  Well, the next problem was what to do with it.  I knew I didn't just want to toss it out; not only would that be bad for the environment, but some of the stuff still works.  So anyhoo, I rattled around the internet for a bit, trying to find the best way to get rid of it.

Turns out, most orgs don't want my old electronic stuff any more than I do, so I thought maybe I could recycle.  Dell has a decent recycling program (you basically pay $10 for it), but you have to package it all up and ship it via DHL.  Not bad, but still involves cost and trouble.

I finally stumbled across freecycle.org.  It's basically a bunch of Yahoo groups, each specific to a particular area.  You can post your offer of free stuff on it, and folks will get back to you about it.  I thought, hey, this could work.  Pretty easy--just post a simple message.  So I did that, and within 10 minutes of the post being approved, I had five emails in my inbox from people wanting it.  I just picked the first that got there, emailed them, and they're picking the stuff up today. 

Totally awesome!  And it's not just for electronics--virtually anything you want to find a new home for (except for you or your children) can be offered there.  Now I'm just trying to figure out what else I can foist off on (err.. give away to) other people. :)  I thought it was cool enough that I wanted to spread the word; it's a great way to keep the landfills empty and potentially help others in the process.  You know how the old saying goes: "one man's trash is another's treasure," and this organization is the perfect proof of that.  So be green for free and go to freecycle.org to pass along your old stuff to others who really want it!

Sunday, April 9, 2006 3:44:27 PM (Eastern Daylight Time, UTC-04:00)
# Saturday, April 8, 2006

One of the great Florida.NET guys, David Silverlight, has put together a pretty nifty community site called Community Credit.  The tag line is "we give stupid prizes to smart people."  The basic gist of the site is that you can enter your community activities and, depending on what they are, get a certain number of credits/points.  Every month, the top N community folks are awarded stupid prizes.  So if you're doing dev community work, go sign up and start recording your points.  Not only will it help you track what you're doing, it will also let others know what you're up to and, hey, you might just win something.

Oh and, yeah, I know, it's been around for a few months; I've just been too lazy to blog about it until now.  So sue me! :)

Edit: I forgot to mention, you should also sign up for the newsletter.  David's got a pretty good sense of humor, and I think you'll enjoy it.

Saturday, April 8, 2006 1:40:32 PM (Eastern Daylight Time, UTC-04:00)
# Thursday, April 6, 2006

This is for anyone who is "lucky" enough to have to use Oracle as a data source in .NET and wants to use the new features in the System.Transactions namespace (which is pretty cool, by the way).  In searching around for an answer to whether Oracle supports System.Transactions, I found that it does, but at a cost.  You see, if you use System.Transactions with Oracle (or any transaction resource manager other than SQL Server 2005), all transactions will be coordinated through the Microsoft Distributed Transaction Coordinator (DTC).  This involves a fair amount of overhead in terms of processing and resources and can involve headaches when truly dealing with distributed transactions.

SQL Server 2005 takes advantage of one of the nifty features of System.Transactions, which is what they call Promotable Single Phase Enlistment (PSPE).  This means that by default, when you start a transaction using, say, TransactionScope, and the first data source in the transaction is SQL Server 2005, it will start out as a standard SQL Server transaction, meaning it would be equivalent to using SqlTransaction on a single connection.  However, if you add other data sources including (I presume) other connections to SQL Server 2005, it will promote that transaction to a distributed transaction managed by the DTC.
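
In code, the pattern looks something like this (the connection string is a placeholder; the comments note where promotion kicks in):

using (TransactionScope scope = new TransactionScope())
{
    using (SqlConnection first = new SqlConnection(connString))
    {
        first.Open(); // enlists as a lightweight SQL Server 2005 transaction
        // ... execute commands ...
    }

    using (SqlConnection second = new SqlConnection(connString))
    {
        second.Open(); // promoted here to a DTC-managed distributed transaction
        // ... execute commands ...
    }

    scope.Complete(); // without this, everything rolls back on dispose
}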

This is pretty cool because many of us use transactional programming against a single data source, so we can take advantage of TransactionScope and the so-called ambient transaction without incurring the overhead of the DTC.  But unfortunately, as of this writing, there is no such support for other transaction resource managers (such as the one included with the Oracle ADO.NET client).

My recommendation would be against using System.Transactions unless you are going to have distributed transactions most of the time.  Hopefully Oracle will implement PSPE for their ADO.NET provider, but in the meantime, it just isn't worth the overhead.

Thursday, April 6, 2006 1:41:04 PM (Eastern Daylight Time, UTC-04:00)

I just ran into an interesting bug in the super simple role provider I wrote for a client.  If you see the following error:

Key cannot be null.
Parameter name: key

System.ArgumentNullException

with a stack trace of:

System.Collections.Specialized.HybridDictionary.get_Item(Object key)
System.Web.Security.RolePrincipal.IsInRole(String role)

Chances are that your role provider is returning a null as one of the role names from the GetRolesForUser method.  In my case, my data source was storing null role name values, and my provider just passed up to the UI whatever roles the data source gave me.  To fix, I added checks to not store null roles for users (long story as to why it is possible to do that in my case).  So that's probably what you need to look for if you see this exception scenario.
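
In case it helps, the fix amounts to something like this in the provider (LoadRolesFromDataSource is a stand-in for whatever your data access actually looks like):

public override string[] GetRolesForUser(string username)
{
    List<string> roles = new List<string>();
    foreach (string role in LoadRolesFromDataSource(username)) // hypothetical helper
    {
        // RolePrincipal.IsInRole chokes on null keys, so filter them out here.
        if (!string.IsNullOrEmpty(role))
            roles.Add(role);
    }
    return roles.ToArray();
}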

HTH!

Thursday, April 6, 2006 1:28:55 PM (Eastern Daylight Time, UTC-04:00)
# Wednesday, April 5, 2006

I warned you about Scott Guthrie, didn't I?  Subscribe to his blog or else! :)

Here you go!

Wednesday, April 5, 2006 9:22:52 PM (Eastern Daylight Time, UTC-04:00)
# Saturday, April 1, 2006

If anyone's like me and is used to using pubs for sample applications, you might find that you miss it when you install SQL Server 2005.  Now I'm sure there are more ways to go about getting it back, but for the heck of it, I did a local hard drive search for pubs, and look what it turned up.  Not only do I have a script to install pubs but also Northwind and others.  The directory on my machine is E:\Program Files\Microsoft.NET\SDK\v2.0 64bit\Samples\Setup, but I figure it'll be under the Samples\Setup of wherever you happen to have the 2.0 SDK installed.   Ahh.. pubs.. dear old friend... 
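
Assuming the script you find is the usual instpubs.sql, running it against a local instance from a command prompt is something like this (the server/instance name is a placeholder):

sqlcmd -E -S .\SQLEXPRESS -i instpubs.sql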

Saturday, April 1, 2006 4:27:49 PM (Eastern Standard Time, UTC-05:00)
# Friday, March 31, 2006

In time, I hope to be able to look back on this day and think wow, was I ever that ignant?  Something that has plagued me and many others in ASP.NET 1.x has been resolved (no pun intended) in 2.0, viz. a static framework method to resolve an application-relative URL into a server-relative URL.  This is one of those problems that you feel like there must be a solution for, but maybe you just could never find it, or maybe you just didn't take the time because you created a solution that worked well enough.

In 1.x, on virtually every application I wrote, I ended up creating some utility method that would essentially do what the ~/ does in a user control, which is equivalent to calling the ResolveUrl method.  The main idea, of course, is that we need a URL that is valid regardless of the context--regardless of whether the application is running on a development machine in a virtual directory or on a production server off the root.  Or maybe we just wanted to make it easy to change the virtual directory name--you know how fickle those users can be when it comes to what they want to see in the address bar! :)

So what did you do in 1.x?  Specifically, when you didn't have a readily-available control to call ResolveUrl on?  If you're anything like me, DotNetNuke, or sundry others, you probably made some utility function that would squish the application path in front of the URL using Request.ApplicationPath, if you could, or, if you were really knowledgeable, the HttpRuntime.AppDomainAppVirtualPath property.  (Shh, don't tell anyone.. apparently, I'm not that knowledgeable, as I just had this pointed out to me by David Ebbo; I usually hacked that one as well, storing it in my Global class by ripping it off of HttpRequest or, worse, specifying it in the web.config... oog.)

Well, even if you do get the application path, you still have to write a function to munge it with the relative URL to come up with what I call the server-relative URL (i.e., a URL that starts with /).  Good grief!  Couldn't they have a function to do this that is static (doesn't require a valid request context)!?
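
For posterity, the kind of 1.x helper I mean looked roughly like this (my own paraphrase, not any particular library's code):

// Rough sketch of the typical 1.x workaround.
public static string ResolveAppUrl(string appRelativeUrl)
{
    // AppDomainAppVirtualPath doesn't need a current request;
    // Request.ApplicationPath was the more commonly known (and hacked) route.
    string appPath = HttpRuntime.AppDomainAppVirtualPath;
    if (!appPath.EndsWith("/"))
        appPath += "/";
    return appPath + appRelativeUrl.TrimStart('~', '/');
}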

Never fear, in 2.0, we have a neat new utility class called VirtualPathUtility!  You can read through the docs to figure out all the neat-o features, but the only one I particularly care about is ToAbsolute.  This one is, as far as I can make out, the functional equivalent of ResolveUrl, so now I can resolve server-relative URLs from app-relative ones without needing a valid request context and without writing my own functoid for it.  Woohoo!  Note: I say "server-relative" because the URL is relative to the server root.  For some reason, the MS folks call it "Absolute."  To me, absolute would include the full URL, including server.  I'm sure they had a good reason for it...

So that's it.  No more messing about with all that garbage.  Just declare all of your URLs using ~/, and call VirtualPathUtility.ToAbsolute to get a URL that should be valid regardless of your deployment environment.  Too cool! 
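
A quick example of the 2.0 way (the paths in the comments assume a hypothetical app deployed at /MyApp):

// No request context required; works in static helpers, handlers, etc.
string url = VirtualPathUtility.ToAbsolute("~/images/logo.gif");

// Deployed in a virtual directory "MyApp": "/MyApp/images/logo.gif"
// Deployed at the server root:            "/images/logo.gif"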

Edit: Forgot to give credit to David Ebbo on the Web Platforms and Tools Team at Microsoft for pointing out this new utility class to me.  Thanks, David!

Friday, March 31, 2006 9:50:42 PM (Eastern Standard Time, UTC-05:00)
# Friday, March 10, 2006

asp.netPRO is doing their yearly survey to see who's the best of the best.  I hope you will consider voting for ASPAlliance.  We've been working to improve your experience over the last year, and our vision is to keep improving and providing high-quality content for Microsoft developers in the future.

Also, if you don't know whom to vote for in hosting, go for Server Intellect.  They do an awesome job of providing affordable, high-quality hosting and enabling you to take control.

Go vote!

Friday, March 10, 2006 6:18:18 PM (Eastern Standard Time, UTC-05:00)
# Thursday, February 23, 2006

I just read an article in The Architecture Journal about this thing called BSAL (Behavioral Software Architecture Language).  The author bores us with the same old history of problems with docs becoming out of date and claims that BSAL is the answer to all that: "the important addition that BSAL brings into play is the common, standard usage of a few building blocks in software definition," which are loosely "system, subsystem, state, behavior, and message objects."

I'm having a real hard time getting excited about this.  In my (and many, many others') opinion, the real problem in business software engineering is not so much that documentation gets out of date (though that is definitely an issue).  The problem is that even our documentation--much less the implementation!--does not accurately describe or meet the actual business needs.  Further, because business needs constantly change, having a better way to do upfront design doesn't (and has proven not to) really solve the core problems; you need a way for your implementation to move with the business changes and, ideally, have that reflected in your design documents.

While I can see the value in having a more accurate overview of the system (and I'm really keen on behavior/process-based design), I think that BSAL is attempting to solve the problem from the solution domain perspective and not the problem domain perspective.  The behaviors that architects of business systems need to be concerned with, in my estimation, are not so much the behaviors of the system (in themselves) but the behaviors of the classes of business objects and, in particular, how those specific behaviors support a greater business process.  Any software architecture language that wants to solve business problems should speak in terms of the business so that the formalization of business processes and behaviors become the actual behaviors of the system.

And that's precisely why I see so much promise in the broader application of domain-specific languages.  You can define DSLs that allow you to speak to specific problem domains.  Not only that, these languages can generate not just high-level stubs but potentially real, executable code.  They do so in ways that are far more pertinent to the problem at hand than any high-level, universal SAL like UML--which raises the question, by the way: what does BSAL provide that UML doesn't?

It is entirely possible that I'm not getting the real benefits behind BSAL, but as it was explained in that article, I just don't see it.  I think the software factories initiative has far more promise to make our industry better than yet another UML.

Thursday, February 23, 2006 2:56:55 PM (Eastern Standard Time, UTC-05:00)
