# Wednesday, 19 September 2007

Hi again.  Today was another good day at the conference. 

User Experience Distilled

The first class I attended was a whirlwind tour of user experience.  I was heartened to learn that I am not alone (or crazy) in recognizing that a number of disciplines go into this thing we call UX.  The presenter, Jeff Patton, also recognizes that virtually every role in software development has an effect on UX--a conclusion I have come to as well (as I hint at in IG's UX area).  I develop the idea more explicitly in an unpublished paper I'm working on.  (I'm hoping the input I get from this conference will help me finish it.)

I actually think that all of this UX stuff falls under the architect's purview because (in my mind at least) he or she is primarily responsible for designing the software as a whole.  This means that architects need to have a conversational familiarity (at least) with the different disciplines that people traditionally think of as user-oriented disciplines, but I'd take it a step further and say that the architect needs to be the chief experience officer, as it were, on a software project.  The architect needs to ensure that the appropriate expertise in user-oriented disciplines is brought to bear on his or her project and also needs to understand how the other aspects of software design and development impact UX and optimize them for good UX. 

That discussion aside, Jeff had a pretty clever graph showing how the kind of software being developed affects the perceived ROI of expenditure on UX.  His talk was also about as effective an introduction to UX as I can imagine.  He dealt with what it is and why it's important, and then offered a high-level overview of key bits of knowledge for people to make use of.  I want to steal his slides! :)

Global Teams & Outsourcing Agilely

The keynote during lunch today was given by Scott Ambler.  It was nice to finally see and hear him in person, since I've heard so much about him.  I got the feeling (which he even admitted) that he was presenting material that wasn't entirely his own--from what I could tell, an overview of a book that IBM publishes on the subject (related).  But that didn't take away from the value of the knowledge by any means.  I'd definitely check it out if you're going to be dealing with geographically distributed teams.

Usability Peer Reviews

In my continuing quest to learn more about UX (part of which is usability), I attended a class by Larry Constantine about lightweight usability practice through peer review/inspection (related paper).  I was actually surprised because he has a very formal methodology for this, which means he's put a lot of thought into it and, more importantly, he's used it a lot in consulting, so it is tested.  I personally am not a big fan of being too formal with these things.  I understand the value in formalizing guidance into a repeatable methodology, but I've always felt that these things should be learned for their principles more than their strictures.  Of course, that runs the risk of missing something important, but I guess that's a trade-off.  Regardless of whether you follow it to a T, there's a ton of good stuff to be learned from this technique about how to plug usability QA into the software process.

Applying Perspectives to Software Views

After that, I slipped over to another Rebecca Wirfs-Brock presentation on applying perspectives to software views in architecture.  (She was presenting the subject of this book.)  To me, the key takeaway was that we should figure out the most important aspects of our system and focus on those.  It echoed (in my mind) core sentiments of domain-driven design, though with a different terminology and approach.  I think the two are complementary--using the view approach helps you to think about the different non-functional aspects, while strategic DDD (in particular, distilling the domain) helps you and your stakeholders focus in on the most important aspects of the system from a domain strategy perspective, and that will inform which views and perspectives need the focus.

This approach also echoes the sentiment Evans expressed yesterday: you can't make every part of the system well designed (elegant, close to perfection).  Once you accept that, you can use these approaches to find the parts of the system where you need to focus most of your energies.  I really like that this practical truth is being made explicit, because I think it can help overcome a lot of the problems in software development that stem from the generally idealistic nature we geeks have.

Expo

After the classes today, they had the expo open.  In terms of professional presentation, it was on par with TechEd's expo, though the scope (number of sponsors) was far smaller.  That said, I popped into the embedded systems expo.  That was a new experience for me.  It was interesting to see almost every booth with some kind of exposed hardware on display.  As a software guy, I tend to take all that stuff for granted.  They even had a booth with specialized networked sensors for tanks of liquid.  This stuff stirred recollections of Weird Science and all the other fun fantasies that geeky kids have about building computerized machines.  The coolest thing there was the Intel chopper, which apparently was built by the Orange County Choppers guys and had a lot of fancy embedded systems on it.  I didn't stick around to hear the spiel, but it was pretty cool.

After the expo, I bumped into a guy at the Cheesecake Factory.  We started chatting, and it turns out that he's in the process of becoming a Roman Catholic deacon.  Pretty cool coincidence for me!  We talked about two of my top passions--my faith and software development (as exemplified here on dotNetTemplar!).  It was a good dinner.  He works at a company that does computer-aided engineering; sounds like neat stuff with all that 3D modeling and virtual physics.  Way out of my league!

As I said, another good day here at SD Best Practices.

Wednesday, 19 September 2007 21:54:31 (Eastern Daylight Time, UTC-04:00)

I meant to write this last night, but I didn't get back to my room till late and just felt like crashing.  I'm at the SD Best Practices conference in Boston this week, which is a new experience for me.  It's one of very few non-MS-oriented conferences I've attended, and I really wanted to come because best practices are a passion of mine (and part of my job).  Infragistics was kind enough to send me.  I thought I'd share my experiences for anyone else considering going (and just for my own reference... hehe).  Anyways, enough of the intro...

Day 1 - Tuesday, 18 September 2007

First off, let me say I like the idea of starting on a Tuesday.  It let me work for a good part of the day on Monday and still make it out here by train on Monday night.  I've found in the past that attending sessions non-stop for a few days can really wear you out, so four days seems about right.

The conference is in the Hynes Convention Center, and I'm at the Westin, a stone's throw away.  It's also right next to Back Bay Station, so the logistics have worked out quite well for me thus far.  I'd personally much rather take a train than a plane anytime.

Responsibility-Driven Design

Tuesday was a day of "tutorials," which are half-day sessions.  So in the morning, I attended Rebecca Wirfs-Brock's tour of responsibility-driven design (RDD).  I actually had her book at one point because it was mentioned in a good light by Dr. West in his Object Thinking, but somewhere along the line I seem to have lost it.  Anyways, I was glad to get a chance to learn from the author directly and to interact.

From what I can ascertain, RDD has some good insight into how to do good object design.  It seems to me that thinking in terms of responsibilities can help you properly break apart the domain into objects if you struggle with just thinking in terms of behavior.  It's potentially easier than just thinking in terms of behaviors because while behaviors will certainly be responsibilities, objects can also have the responsibility to "know" certain things, so it is a broader way of thinking about objects that includes their data.
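To make the "knowing vs. doing" distinction concrete, here's a minimal sketch (my own hypothetical Invoice example, not from the session) of a class whose responsibilities include both knowing things (its due date, its line items) and doing things (totaling itself, deciding whether it's overdue):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical example of responsibility-driven design: the Invoice is
// responsible for *knowing* its due date and line items, and for *doing*
// things like computing its total and judging its own lateness.
public class LineItem
{
    private readonly string _description;
    private readonly decimal _amount;

    public LineItem(string description, decimal amount)
    {
        _description = description;
        _amount = amount;
    }

    public string Description { get { return _description; } }
    public decimal Amount { get { return _amount; } }
}

public class Invoice
{
    private readonly List<LineItem> _items = new List<LineItem>();
    private readonly DateTime _dueDate;

    public Invoice(DateTime dueDate) { _dueDate = dueDate; }

    // "Knowing" responsibilities.
    public DateTime DueDate { get { return _dueDate; } }
    public IEnumerable<LineItem> Items { get { return _items; } }

    // "Doing" responsibilities.
    public void Add(LineItem item) { _items.Add(item); }

    public decimal Total()
    {
        decimal total = 0m;
        foreach (LineItem item in _items) total += item.Amount;
        return total;
    }

    public bool IsOverdue(DateTime asOf) { return asOf > _dueDate; }
}
```

The point is that the data the object knows is stated in service of its behaviors, not the other way around.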

That said, it doesn't really negate the point of focusing on behaviors, particularly for folks with a data-oriented background because I do think that focusing on the behaviors is the right way to discover objects and assign them the appropriate responsibilities.  I think the key difference is that with the object-thinking approach, you know that there will be data and that it is important to deal with, but you keep it in the right perspective--you don't let it become the focus of your object discovery.

Another beneficial idea Ms. Wirfs-Brock offers is using stereotypes as a way to discover objects in the domain.  This is more helpful, I think, when dealing with objects that are part of the software domain rather than the business domain, because the stereotypes are very software-oriented (interfacers, information holders, etc.).

In terms of process, she advocates having everyone on a team write down their thoughts about the problem being faced in a few sentences--focusing on what seems like it'll be a challenge, what will be easy, what you've run into before, etc.--and then bring those to the initial design meetings.  I like the idea because it bypasses the introvert-extrovert problem you sometimes get in meetings, and you start out with a lot of ideas to really jump start the design.  It's a good way to ensure you don't miss out on ideas due to personality issues.

The other thing I like in her process is writing down a purpose statement for objects as you discover them and thinking of them as candidates.  This is part of the CRC card process (the first C is now "candidates").  The reason I like it is that it helps you to focus on the point of the object and sort of justify its existence, which can help weed out some bad ideas. 

What I don't like about the process is the overall CRC card idea.  While it surely is more lightweight than many approaches to object design, you still end up with a bunch of paper that you then have to translate into code at some point.  I much prefer to use a tool that literally creates the code as I design; I've found the VS class designer serves this purpose quite well.  In fact, on the way up here, I spent some time doing up a sample class diagram using the object-thinking approach to share as an example of domain modeling.  I'll be sharing it soon, but I mention it to say this is not just speculation.  It was actually very lightweight and easy to discover objects and model the domain that way, and at the end I had literal code that I can either fill out or hand off to other devs, who can then further refine it.

Domain-Driven Design

The second session I attended was one by Eric Evans on strategic domain-driven design.  Eric wrote a book on the subject that's been well received by everyone I've encountered who has spent time with it.  I've seen a presentation on it, and I've read parts of Jimmy Nilsson's Applying Domain-Driven Design and Patterns.  So I thought I was well enough acquainted with the ideas, but as I often find to be the case, if you rely on second-hand info, you'll inevitably get a version that has been interpreted and is biased towards that person's point of view.

For instance, most of what I've seen on DDD is focused on what Eric calls "tactical" DDD, i.e., figuring out the objects in the domain and ensuring you stay on track with the domain using what he calls the "ubiquitous language."  Eric presented parts of his ideas yesterday that he calls "strategic" because they are more geared towards strategic level thinking in how you approach building your software.  Two key takeaways I saw were what he calls context mapping, which seems to be a really effective way to analyze existing software to find where the real problems lie, and distilling the domain, which is a way to really focus in on the core part of a system that you need to design.

In short (very abbreviated), he claims (and I agree) that no large system will be completely well designed, nor does it need to be.  This isn't to say you should be sloppy, but it helps you focus your energies where they need to be focused--on the core domain.  Doing this can actually help businesses figure out where they should consider buying off-the-shelf solutions and/or outsourcing, as well as where to focus their best folks.  It's a pretty concrete way to answer the buy-vs.-build question.

Anyways, I'm definitely going to get his book to dig in deeper (it's already on the way).  Please don't take my Cliffs Notes here as the end of your exploration of DDD.  It definitely warrants further digging, and it is very complementary to a good OOD approach.

After all this, I was privileged enough to bump into Eric and have dinner, getting to pick his brain a bit about how all his thinking on DDD came together, his perspectives on software development, and how to encourage adoption of better design practices (among other things).  Very interesting conversation, one that would have been good for a podcast.  I won't share the details, but I'm sure folks will eventually see some influence this conversation had on me.  Good stuff.

Software for Your Head

I almost forgot about Jim McCarthy's keynote.  I've only seen Jim twice (once in person and once recorded).  He's a very interesting and dynamic speaker, which makes up for some lack of coherence.  I find the best speakers tend to come across a bit less coherent because they let speaking become an adventure that takes them where it will.  But there was definitely value in his message.  I tend to agree with his assertion that what we all do on a daily basis has a larger impact on humanity than we realize, and I can't argue with his experience in building teams that work.  http://www.mccarthyshow.com/ is definitely worth a look.

Overall, Tuesday was a big success from an attendee perspective.  So far so good!

Wednesday, 19 September 2007 11:17:20 (Eastern Daylight Time, UTC-04:00)
# Monday, 17 September 2007

When searching recently for further reading on "domain model" for a post, I was quite surprised to find that there seemed to be no good definition readily available (at least not by Googling "domain model").  Since I tend to use this term a lot, I figured I'd try to fill this gap and, at the very least, provide a reference for me to use when I talk about it.

So What is a Domain Model?
Put simply, a domain model is the software model of a particular domain of knowledge (is that a tautology?).  Usually, this means a business domain, but it could also mean a software domain (such as the UI domain, the data access and persistence domain, the logging domain, etc.).  More specifically, this means an executable representation of the objects in a domain with a particular focus on their behaviors and relationships[1].

The point of the domain model is to accurately represent these objects and their behaviors such that there is a one-to-one mapping from the model to the domain (or at least as close as you can get to this).  The reason this is important is that it is the heart of software solutions.  If you accurately model the domain, your solution will actually solve the problems by automating the domain itself, which is the point of pretty much all business software.  It will do this with much less effort on your part than other approaches to software solutions because the objects are doing the work that they should be doing--the same that they do in the physical world.  This is part and parcel of object-oriented design[2].

Nothing New
By the way, this is not a new concept--OO theory and practice have been around for decades.  It's just that somewhere along the line, the essence of objects (and object-oriented design) seems to have been lost, or at least distorted, and many, if not most, Microsoft developers have probably not been exposed to it, have forgotten it, or have been confused into designing software in terms of data.  I limit myself to "Microsoft developers" here because they are the ones I have the most experience with, but I'd wager, from what I've read, the same is true of Java and other business developers.

I make this claim because everyone seems to think they're doing OO, but concrete examples of OOD using Microsoft technologies are few and far between.  Those who try seem to be more concerned with building in framework services (e.g., change tracking, data binding, serialization, localization, and data access & persistence) than with actually modeling a domain.  Not that these framework services are unimportant, but it seems to me that this approach is fundamentally flawed because the focus is on software framework services and details instead of on the problem domain--the business domain that the solutions are being built for.

The Data Divide
I seem to write about this a lot; it's on my mind a lot[3].  Those who try to do OOD with these technologies usually end up being forced into doing it in a way that misses the point of OOD.  There is an unnatural focus on data and data access & persistence.  Okay, maybe it is natural or it seems natural because it is ingrained, and truly a large part of business software deals with accessing and storing data, but even so, as I said in Purporting the Potence of Process[4], "data is only important in as much as it supports the process that we're trying to automate."

In other words, it is indeed indispensable but, all the same, it should not be the end or focus of software development (unless you're writing, say, a database or ORM).  It may sound like I am anti-data or being unrealistic, but I'm not--I just feel the need to correct for what seems to be an improper focus on data.  When designing an application, think and speak in terms of the domain (and continue to think in terms of the domain throughout the software creation process), and when designing objects, think and speak in terms of behaviors, not data. 

The data is there; the data will come, but your initial object models should not involve data as a first class citizen.  You'll have to think about the data at some point, which will inevitably lead to specifying properties on your objects so you can take advantage of the many framework services that depend on strongly-typed properties, but resist the temptation to focus on properties.  Force yourself to not add any properties except for those that create a relationship between objects; use the VS class designer and choose to show those properties as relationships (right-click on the properties and choose the right relationship type).  Create inheritance not based on shared properties but on shared behaviors (this in itself is huge).  If you do this, you're taking one step in the right direction, and I think in time you will find this a better way to design software solutions.
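As a sketch of those last two points--inheritance based on shared behavior rather than shared properties, and properties reserved for relationships between objects--here's a hypothetical example (the Approver/PurchaseRequest names are mine, purely for illustration):

```csharp
// Hypothetical sketch: the Approver hierarchy exists because Manager and
// Director share a *behavior* (approving), not because they share data.
public abstract class Approver
{
    // The shared behavior that justifies the inheritance.
    public abstract bool Approve(decimal amount);
}

public class Manager : Approver
{
    public override bool Approve(decimal amount) { return amount <= 1000m; }
}

public class Director : Approver
{
    public override bool Approve(decimal amount) { return amount <= 100000m; }
}

public class PurchaseRequest
{
    // A relationship to another object, not a bag of data attributes.
    private readonly Approver _approver;

    public PurchaseRequest(Approver approver) { _approver = approver; }

    // The request's behavior delegates to the object it is related to.
    public bool Submit(decimal amount) { return _approver.Approve(amount); }
}
```

Note that none of these classes started from a list of properties; the data each one holds fell out of the behaviors it is responsible for.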

My intent here is certainly not to make anyone feel dumb, stupid, or like they've wasted their lives in building software using other approaches.  My intent is to push us towards what seems to be a better way of designing software.  Having been there myself, I know how easy it is to fall into that way of thinking and to imagine that simply by using these things called classes, inheritance, and properties that we're doing OOD the right way when we're really not.  It's a tough habit to break, but the first step is acknowledging that there is (or at least might be) a problem; the second step is to give object thinking a chance.  It seems to me that it is (still) the best way to do software and will continue to be in perpetuity (because the philosophical underpinnings are solid and not subject to change).

Notes
1. An object relationship, as I see it, is a special kind of behavior--that of using or being used.  This is also sometimes represented as having, e.g., this object has one or more of these objects.  It is different from data because a datum is just a simple attribute (property) of an object; the attribute is not an object per se (at least not in the domain model) because it has no behaviors of its own apart from the object it is attached to.  It is just information about a domain object.

2. I go into this in some depth in the Story paper in the Infragistics Tangerine exemplar (see the "To OOD or Not to OOD" section).  I use the exemplar itself to show one way of approaching domain modeling, and the Story paper describes the approach.

3. Most recently, I wrote about this in the Tangerine Story (see Note 2 above).  I also wrote publicly about it back in late 2005, early 2006 in "I Object," published by CoDe Magazine.  My thought has developed since writing that.  Interestingly, in almost two years, we seem to have only gotten marginally better ways to deal with OOD in .NET. 

4. In that article, I put a lot of focus on "process."  I still think the emphasis is valid, but I'd temper it with the caveat that however business rules are implemented (such as in the proposed workflow-driven validation service), you should still think of them as part of your domain model.  The reason for separating them into a separate workflowed service is a compromise between pragmatism and idealism given .NET as the implementation platform.  I've also since learned that the WF rules engine can be used apart from an actual .NET workflow, so depending on your application's needs, just embedding the rules engine into your domain model may be a better way to go than using the full WF engine.  If your workflow is simple, this may be a better way to approach validation.

Monday, 17 September 2007 11:41:54 (Eastern Daylight Time, UTC-04:00)
# Saturday, 15 September 2007

As I sit here on my deck, enjoying the cool autumn breeze[1], I thought, what better thing to write about than Web services!  Well, no, actually I am just recalling some stuff that's happened lately--on the MSDN Architecture forums and in some coding and design discussions we had this week--both of which involved the question of best practices for Web services.

Before we talk about Web services best practices, it seems to me that we need to distinguish between two kinds of application services.  First, there are the services that everyone has been talking about for the last several years--those that pertain to service-oriented architecture (SOA).  These are the services that fall into the application integration camp, so I like to call them inter-application services. 

Second, there are services that are in place to make a complete application, such as logging, exception handling, data access and persistence, etc.--pretty much anything that makes an application go and is not a behavior of a particular domain object.  Maybe thinking of them as domain object services would work, but I fear I may already be losing some of you, so let's get back to it.  The main concern of this post is those services used within an application, so I call them intra-application services.

It seems like these latter services, the intra-application ones, are often being confused with the former--the inter-application services.  It's certainly understandable, because there has been so much hype around SOA in recent years that the term "service" has been taken over and has lost its more generic meaning.  What's worse, there has been a lot of confusion around the interplay of the terms Web service and plain service (in the context of SOA).  The result is that you have folks thinking that all Web services are SO services, and sometimes that SO services are always Web services.

My hope here is to clarify the way I think we should be thinking about all this.  First off, Web services are, in my book at least, simply HTTP-protocol-based services, usually involving XML as the message format.  There is no, nor should there be, any implicit connection between the term Web service and service-oriented service.  So when you think Web service, don't assume anything more than that you're dealing with a software service that uses HTTP and XML.

The more important distinction comes in the intent of the service--the purpose the service is designed for.  Before you even start worrying about whether a service is a Web service or not, you need to figure out what the purpose of the service is.  This is where I get pragmatic (and those who know me know that I tend to be an idealist at heart).  You simply need to determine if the service in question will be consumed by a client that you do not control. 

The reason this question is important is that it dramatically affects how you design the service.  If the answer is yes, you automatically take on the burden of treating the service as an integration (inter-application) service, and you must concern yourself with following best practices for those kinds of services.  The core guideline is that you cannot assume anything about the way your service will be used.  These services are the SO-type services that are much harder to design correctly, and there is tons of guidance available on how to do them[2].  I won't go in further depth on those here.

I do think, though, that the other kind of services--intra-application services--have been broadly overlooked or just lost amidst all the discussion of the other kind.  Intra-application services do not have the external burdens that inter-application services have.  They can and should be designed to serve the needs of your application or, in the case of cross-cutting services (concerns), the needs of the applications within your enterprise.  The wonderful thing about this is that you do have influence over your consumers, so you can safely make assumptions about them, enabling you to make compromises in favor of other architectural concerns like performance, ease of use, and maintainability.

Now let's bring this back to the concrete question of best practices for intra-application Web services.  For those who are using object-oriented design, designing a strong domain model, you may run into quite a bit of trouble when you need to distribute your application across physical (or at least process) tiers.  Often this is the case for smart client applications--you have a rich front end client that uses Web services to communicate (usually for data access and persistence).  The problem is that when you cross process boundaries, you end up needing to serialize, and with Web services, you usually serialize to XML.  That in itself can pose some challenges, mainly around identity of objects, but with .NET, you also have to deal with the quirks of the serialization mechanisms.

For example, the default XML serialization requires that properties be public and read-write, and that you have a public default constructor.  These requirements can break encapsulation and make it harder to design an object model you can count on to act the way you expect.  WCF improves on this by letting you use attributes for finer control over serialization.  The other commonly faced challenge is on the client.  By default, if you use VS's Add Web Reference, it takes care of generating your service proxies, but it introduces a separate set of proxy objects that are of different types than your domain objects.
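As a rough illustration of the WCF improvement (a sketch using the DataContractSerializer; the Customer class here is hypothetical): marking a private field with [DataMember] lets the object round-trip without public read-write properties or a public default constructor, so encapsulation survives the wire.

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    // A private field can be serialized when marked [DataMember];
    // no public setter or public default constructor is required.
    [DataMember] private string _name;

    public Customer(string name) { _name = name; }

    public string Name { get { return _name; } }  // read-only to callers
}

public static class Demo
{
    // Serialize to XML and back, as would happen crossing a process boundary.
    public static Customer RoundTrip(Customer customer)
    {
        var serializer = new DataContractSerializer(typeof(Customer));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, customer);
            stream.Position = 0;
            return (Customer)serializer.ReadObject(stream);
        }
    }
}
```

Contrast this with the default XmlSerializer, which would reject this class outright for lacking a public parameterless constructor.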

So you're left with the option of either using the proxy as-is and doing a conversion routine to convert the proxy objects to your domain objects, or you can modify the proxy to use your actual domain objects.  The first solution introduces both a performance (creating more objects and transferring more data) and a complexity (having conversion routines to maintain) hit; the second solution introduces just a complexity hit (you have to modify the generated proxy a bit).  Neither solution is perfectly elegant--we'd need the framework to change to support this scenario elegantly; as it is now, the Web services stuff is designed more with inter-application services in mind (hence the dumb proxies that encourage an anemic domain model) than the intra-application scenario we have where we intend to use the domain model itself on the client side.

If you take nothing else away from this discussion, I'd suggest the key take away is that when designing Web services, it is perfectly valid to do so within the scope of your application (or enterprise framework).  There is a class of services for which it is safe to make assumptions about the clients, and you shouldn't let all of the high-falutin talk about SOA, WS-*, interoperability, etc. concern you if your scenario does not involve integration with other systems that are out of your control.  If you find the need for such integration at a later point, you can design services (in a service layer) then to meet those needs, and you won't be shooting yourself in the foot trying to design one-size-fits-all services now that make so many compromises so as to make the app either impossible to use or very poorly performing.

My own preference that I'd recommend is to use the command-line tools that will generate proxies for you (you can even include a batch file in your project to do this) but then modify them to work with your domain model--you don't even need your clients to use the service proxies directly.  If you use a provider model (plugin pattern) for these services, you can design a set of providers that use the Web services and a set that talk directly to your database.  This enables you to use your domain model easily in both scenarios (both in a Web application that talks directly to the db as well as a smart client that uses Web services). 
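Here's a minimal sketch of that provider idea (the names are illustrative, and both providers are stubbed with dictionaries so the sketch stands alone; in real code one would wrap the generated service proxies and the data access layer, respectively):

```csharp
using System.Collections.Generic;

public class Customer
{
    private readonly int _id;
    private readonly string _name;

    public Customer(int id, string name) { _id = id; _name = name; }

    public int Id { get { return _id; } }
    public string Name { get { return _name; } }
}

// Client and domain code depend only on this abstraction.
public interface ICustomerProvider
{
    Customer GetById(int id);
    void Save(Customer customer);
}

// In a smart client, this implementation would wrap the generated
// Web service proxies, translating to and from real domain objects.
public class WebServiceCustomerProvider : ICustomerProvider
{
    private readonly Dictionary<int, Customer> _fakeService = new Dictionary<int, Customer>();
    public Customer GetById(int id) { return _fakeService[id]; }
    public void Save(Customer customer) { _fakeService[customer.Id] = customer; }
}

// In a Web application, this one would talk directly to the database.
public class DatabaseCustomerProvider : ICustomerProvider
{
    private readonly Dictionary<int, Customer> _fakeDb = new Dictionary<int, Customer>();
    public Customer GetById(int id) { return _fakeDb[id]; }
    public void Save(Customer customer) { _fakeDb[customer.Id] = customer; }
}
```

Because consumers depend only on ICustomerProvider, swapping the Web-service-backed implementation for the direct-database one becomes a configuration choice rather than a code change.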

It requires a little extra effort, but it means you can design and use a real domain model and make it easier to use by hiding the complexity of these framework deficiencies from consumers of the domain model.  This is especially helpful when different sets of developers work on different layers of the application, but it is also ideal for use and reuse by future developers.

One of these days, I'll write some sample code to exemplify this approach, maybe as part of a future exemplar.

Notes
1. The weatherthing says it's 65 degrees Fahrenheit right now--at 1pm!
2. My observation is that it is safe to assume that when other people talk about services and Web services, these are the kind they're thinking of, even if they don't make the distinction I do in this post. 

Saturday, 15 September 2007 18:00:03 (Eastern Daylight Time, UTC-04:00)
# Monday, 10 September 2007

I wasn't going to post about it, but after reading Don's post, I realized that I should so that I can thank those involved in presenting me with this honor.  I was surprised when I was contacted about being nominated to be an INETA speaker, and I was even more surprised when I heard that I'd been voted in.  Looking over the folks on the list, I feel hardly qualified to be named among them.

So without further ado, let me thank David Walker (who's an all-around great guy and VP of the Speakers Bureau), Nancy Mesquita (whom I've not had the pleasure of meeting personally but who has been very helpful in her role as Administrative Director), as well as everyone else involved on the Speaker Committee and beyond (whom I don't know specifically) for welcoming me into the INETA speaker fold.  It's a great honor--thank you.

Now, I have to get back to work!  My group, UXG, just released Tangerine, the first of our exemplars, and now we're on to the next great thing!

Monday, 10 September 2007 10:19:19 (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [1]  | 
# Monday, 27 August 2007

A week or so ago, I received a nice little letter saying that I'd been nominated to the Cambridge Who's Who, which purports to be an organization that recognizes industry professionals.  All I had to do was fill out a simple form online and I'd be entered, so I did (it never hurts to add a tick mark to your resume...).  A few days later (today, in fact), they called and asked for information about me, which I provided.  After congratulating me on being inducted, they introduced me to their "Platinum" and "Gold" membership options, which cost several hundred dollars.

At this point, I was getting a tad suspicious, and being one who rarely buys anything over the phone, I said thanks for the info, but I'd have to think about it more.  It was then that the true colors of the whole deal became clear.  I was told that in order to publish my info and get access to all these wondrous benefits of membership, I needed to decide whether I wanted to be gold or platinum.  I balked, saying that most industry accolades don't come with a price tag (at least not the ones I've received).  In fact, they tend to come with benefits.

Well, not so with the Cambridge Who's Who.  You have to pay hundreds of dollars for the honor of being a member.  Maybe for some, it'd be worth it.  But considering I'd never heard of them prior to the letter I was sent, I wasn't about to fork over cash to join.  The "services" they provide are publishing my info and connecting me to the other 250,000 notables.  Wait a sec.  Don't I get that and more for free using things like LinkedIn and Facebook? 

So if you get a letter from them, be forewarned.  Don't waste your time unless you intend to fork over a handful of cash for services you can get for free.

Monday, 27 August 2007 11:49:00 (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [1]  | 
# Tuesday, 14 August 2007

Thanks to a sharp co-worker of mine, I was recently introduced to "Magic Ink: Information Software and the Graphical Interface," by Bret Victor.  It was quite an interesting read; Victor makes a lot of good points.  For instance, he suggests that we should view information software as graphic design, i.e., treat the concerns of traditional graphic design as paramount and then take them to the next level by availing ourselves of context sensitivity, which he defines as inferring context from the environment, from history, and, as a last resort, from interaction.

Minimizing Interaction

The thrust of the argument is about reducing interaction and making software smarter, i.e., more context aware and, eventually, able to learn through abstractions over learning algorithms.  I think we can all agree with this emphasis, but I do think he unnecessarily latches onto the term "interaction" as a bad thing, or rather, he presents "interaction design" in an overly negative light. 

True, the smarter we can make computers (and the less interaction we consequently require from users) the better, but that doesn't negate the usefulness of interaction design, human factors, information architecture, and usability.  There are many valuable things to be learned and used in all of these interaction-oriented fields, and we shouldn't deride or dismiss them because they focus on interaction.  I felt that Victor's negative emphasis here, and his speculation that this is why software sucks, took away from the value of his overall message.

The Problem of Privacy

There is one problem that I don't think he addressed in terms of increasing environmental context awareness, and that is security--specifically, privacy.  It is tempting to imagine how wonderful it would be for a computer to know more about our environment than we do and thus be able to anticipate our needs and desires, but to enable this, we, as humans, will have to sacrifice some level of privacy.  Do we really want a totally connected computer to know precisely where we are at all times?  Do we really want it to be "reporting" this constantly by querying location-aware services?  Do we really want a computer to remember everything we've done--where we've been, whom we've interacted with, when we did things?

I think the trickier issues with context awareness have to do with questions like these.  How do we enable applications to interact with each other on our behalf, requiring minimal interaction from us, while maintaining our privacy?  How does an application know when it is okay to share X data about us with another application?  Do we risk actually increasing the level of interaction (or at least just changing what we're interacting about) in order to enable this context sensitivity? 

If we're not careful, we could end up with a Minority Report world.  People already complain about cookies and wiretaps; a world of computer context sensitivity will increase privacy concerns by orders of magnitude.  This is not to negate the importance of striving towards greater context sensitivity.  It is a good goal; we just need to be careful how we get there.

Towards Graphic Design

One of the most effective points he made was in illustrating the difference between search results as an index and search results as an evaluation tool in themselves, i.e., thinking about lists of information in terms of providing enough information for a comparative level of decision making.  It is a shift in how developers can (and should) think about search results (and lists in general).

Similarly, his example of the subway schedule and comparing it to other scheduling applications is a critical point.  It illustrates the value of thinking in terms of what the user wants and needs instead of in terms of what the application needs, and it ties in the value of creating contextually meaningful visualizations.  He references and recommends Edward Tufte, and you can see a lot of Tufte in his message (both in the importance of good visualizations and the bemoaning of the current state of software).  I agree that too often we developers are so focused on "reuse" that we fail miserably in truly understanding the problems we are trying to solve, particularly in the UI.

That's one interesting observation I've been able to make while working closely with graphic/visual designers.  They want to design each screen in an application as if it were a static canvas so that they can make everything look and feel just right.  That makes sense from a visual design perspective, but developers are basically the opposite--they want to find the one solution that fits all of their UI problems.  Give a developer a nicely styled screen, and he'll reuse that same style throughout the entire application.  In doing so, developers accidentally stumble onto an important design and usability concept (that of consistency), but they do it because they're reusing the design for maximum efficiency, not because they're consciously concerned about UI consistency!  It is a kind of impedance mismatch between the way a designer views an application UI and the way a developer does.

The Timeless Way

I'm currently reading Christopher Alexander's The Timeless Way of Building, which I hope to comment on in more depth when done.  But this discussion brings me back to it.  In fact, it brings me back to Notes on the Synthesis of Form as well, an earlier work of his.  One of the underlying currents in both is designing a form (a solution, if you will) that best fits the problem and environment (the context).  The timeless way (and patterns and pattern languages, especially) is all about building things that are alive, that flow and thrive and fit their context, and the way you do that is not by slapping together one-size-fits-all solutions (i.e., reusing implementations) but by discovering the patterns in the problem space and applying patterns from the solution space that fit just so.  The reuse is in the patterns, at the conceptual level, but the implementation of a pattern must always be customized to fit the problem snugly. 

This applies in the UI as well as other areas of design, and that's the underlying current behind both Tufte's and Victor's arguments for the intelligent use of graphic design and visualization to convey information.  You must start by considering each problem in its context, learn as much as you can about the problem and context, then find patterns that fit and implement them for the problem in the way that makes the most sense for the problem.  But more on the timeless way later.

A Good Read

Overall, the paper is a good, thought-provoking read.  I'd recommend it to pretty much any software artisan as a starting point for thinking about these issues.  It's more valuable knowledge that you can put in your hat and use when designing your next software project.

Tuesday, 14 August 2007 10:41:14 (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 30 July 2007

Are you passionate about software development?  Do you love to share your knowledge with others?  Do you like working in a vibrant, fun culture working on the latest and greatest technologies with other smart and passionate people?  If so, I think I may have your dream job right here.

We're looking for another guidisan to help craft guidance using best practices for .NET development.  The word guidisan ('gId-&-z&n) comes from a blending of "guidance" and "artisan," which really speaks to the heart of the matter.  We're looking for software artisans who have the experience, know-how, and gumption to explore strange new technologies, to seek out new applications and new user scenarios, to boldly go where other developers only dream of going in order to provide deep, technical guidance for their colleagues and peers.

What do guidisans do? 

  • Help gather, specify, and document application vision, scope, and requirements.
  • Take application requirements and create an application design that meets the requirements and follows best known practices for both Microsoft .NET and Infragistics products.
  • Implement applications following requirements, best practices, and design specifications.
  • Create supplemental content such as articles, white papers, screencasts, podcasts, etc. that help elucidate example code and applications.
  • Research emerging technologies and create prototypes based on emerging technologies.
  • Contribute to joint design sessions as well as coding and design discussions.

What do I need to qualify?

  • Bachelor’s Degree.
  • 4+ years of full-time, professional experience designing and developing business applications.
  • 2+ years designing and developing .NET applications (UI development in particular).
  • Be able to create vision, scope, and requirements documents based on usage scenarios.
  • Demonstrated experience with object-oriented design; familiarity with behavior-driven design, domain-driven design, and test-driven development a plus.
  • Demonstrated knowledge of best practices for .NET application development.
  • Be able to accept and provide constructive criticism in group situations.
  • Be able to follow design and coding guidelines.
  • Be able to clearly communicate technical concepts in writing and speaking.

If you think this is your dream job, contact me.  Tell me why it's your dream job and why you think you'd be the next great guidisan.

Monday, 30 July 2007 15:01:27 (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [1]  | 

Disclaimer
The opinions expressed herein are solely my own personal opinions, founded or unfounded, rational or not, and you can quote me on that.

Thanks to the good folks at dasBlog!

Copyright © 2017 J. Ambrose Little