# Wednesday, September 14, 2011


[Microsoft's BUILD architecture diagram, courtesy of ZDNet]

Much ado has been made in recent months about the impending death of <insert name of MS UI stack here>. This week, at BUILD, Microsoft finally stepped up and showed us its view of the future. Note that, in their architecture diagram, the current "desktop" technologies are represented as a peer. Included on the right side is Windows Forms, which of all .NET technologies has had its death most greatly exaggerated; and yet it is still alive.

The point is, despite all the "future is Metro" talk by analysts (e.g., Mary Jo Foley herself), the fact remains that these are peer technologies, and Microsoft is not "killing" anything on the right side. In fact, no such intent has been expressed, implicitly or explicitly.

That's not to say, of course, that nothing has changed. That's not to say that we can or should ignore Metro/WinRT (duh!). But there seems to be this common knee-jerk reaction, whenever a new technology is released, to declare that the current ones are dead or somehow not worth investing in further. That reaction just doesn't reflect reality.

As impressed and (mostly) happy as I am about the direction expressed in the Win8 stack, we need to keep in mind that we are still in the early stages, still in gestation. The baby isn't even born yet, and once it is born, it will take time to grow up and mature. In the meantime, we have these mature, stable, currently released technologies that are great to build on.

I think it's great that Microsoft has released this stuff early. I like that about them more than other tech vendors. Although they've been more tight-lipped about this than about any other tech they've released, the fact remains we have plenty of time to plan, prepare, design, prototype, explore, and ultimately build for the new stack. In the meantime, we can still safely invest in the current technologies.

The future is uncertain. That is the nature of the future. Devs need to quit asking Microsoft, unrealistically, to guarantee the future of technology. We know that it would be bad business for Microsoft to kill off these current technologies; so bad, in fact, that we can take it as a positive guarantee that they are here to stay for any future we should currently be planning for. We will always have legacy. Someday, the Win8 stack will, I assure you, be legacy.

The things that remain constant are:

  • Understand the needs of your application context.
  • Understand the capabilities, strengths, and weaknesses of the various technologies you can build on, including current investments.
  • Understand your team's capabilities, strengths, and weaknesses, including current investments.
  • And choose the technology stack that makes the most sense, best balancing all these considerations, realizing that you won't make all the right choices (in retrospect) and that this is just life as a software professional.

Everything else is just a bunch of unnecessary worry and hullabaloo.

Wednesday, September 14, 2011 9:58:18 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [1]  | 
# Wednesday, April 23, 2008

Just reading the first article in the latest edition of Microsoft's The Architecture Journal.  It's called "We Don't Need No Architects" by Joseph Hofstader.  I thought, oh good, someone voicing a dissident opinion, but the article is rather a rebuttal to that claim.  I figure maybe a response to the response is in order. :)

Mr. Hofstader suggests that architects think in terms of bubbles and devs think in terms of code and, by extension, only see part of the picture.  He describes various "architectural" activities such as analyzing the problem domain, choosing and applying appropriate technologies to solve problems, and the use of patterns.

Is it just me, or is this a sort of dumbing down of the developer role in order to support a potentially unnecessary distinction between it and "the architect"?  I mean, a smart developer needs to do all of these things, too.  Developers are not just code monkeys.

In fact, by introducing such a division of responsibilities, we would seemingly perpetuate a long-standing problem in software development--a disjuncture between the problem and solution spaces--because we keep trying to insert these business translators (call them technical business analysts, software architects, whatever you want) into our methodology.

What's wrong with this?  First, it puts the burden for understanding the business onto one (or a few) persons, but more importantly, it limits that mind share to those individuals.  That is never a good thing, but it is especially bad for software.  In so doing, it also puts a burden on those individuals to correctly interpret and translate (a considerable challenge) and finally to sufficiently communicate a design to developers--enter large specification documents, heavier process, and more overhead.

On the other hand, domain-driven design, for instance, is all about instilling domain knowledge into the solution and coming to a common alignment between the business and the solution creators.  It's axiomatic in business that you need a shared vision to be successful, and this approach to software creation is all about that.  Shared vision, mutual cooperation, and a shared language. 

It eliminates the need for a translator because both sides learn to speak the same domain language.  It eliminates the knowledge bottlenecks (or at least greatly reduces them), and it increases shared knowledge.  And DDD is not burdened with the distinction between an architect and a developer.  Agile methodologies in general are geared toward reducing barriers and overhead in the creation of software (which is why they're generally more successful and why they can scale).

I hope that all the brilliant and better-known/more-respected folks will forgive me; this is not intended as a slight, but I have to ask--are we creating the "architecture" profession unconsciously just to create a more defined career path (i.e., a way for us techies to move up the ranks)?  Are we just going with the flow from an old but broken analogy?  Are we introducing roles that really would be better served through other, non-architecty roles?

To this last point, I see some folks suggesting "infrastructure" and "business" and "software" and "what-have-you" architects.  Why are we so keen on the term "architect"?  I'll grant, it does sound really fancy, but it is so, so painfully clear that it is ambiguous and overloaded (and inaccurate, if you ask me).  Maybe these other roles do need to exist in some organizations, but it seems like we're just bent on calling them "architect" for no apparent good reason other than that we've latched onto it as a respectable (and well-paid) moniker.

In choosing to proliferate the "architect" terminology, we're perpetuating and extending the confusion around it.  We're purporting to solve the problem of it being ill-defined, but in reality we're doing the opposite.  And everyone (IASA, Open Group, Microsoft, to name some just in the latest issue of the Journal) is trying to do it all at once with little coordination. 

It seems borderline insane. 

Or maybe I'm the crazy one?

there is no spoon

Wednesday, April 23, 2008 3:15:42 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, April 15, 2008

I'm becoming more and more averse to the terms architecture and architect as applied to creating software, partly because they are so overloaded that they cause so much continual opining about their meaning but, more importantly, because I don't think they describe what we do, at least not to the extent that seems to be commonly thought.

We are not building houses (or bridges, or skyscrapers, or cities).  Houses, and other physical constructions, rely on pretty much immutable laws of nature, of physics, chemistry, etc.  These sciences are sciences in the established sense--you can perform experiments repeatedly and get the same results, and others who perform those experiments will get the same results.  Physical building, architecture, and engineering are fundamentally scientific endeavors because they essentially serve scientific laws.1

Software Serves Human Social Needs
Software, on the other hand, is fundamentally a human and social endeavor.  Above the basic electrical and magnetic level, i.e., hardware, it is purely human constructs, layers of human-created abstractions made to serve human social needs--for, ultimately, business or pleasure.  As such, we (as a human industry) are pretty much free to create the abstractions as we see fit.

Beyond the basic hardware translation layer, we are not bound by elemental laws, only by our imagination.  The problem is, it seems to me, that early software development was very closely tied to the electrical engineering disciplines that gave birth to computing machinery, so the early abstractions were engineering-oriented and assumed an unnecessary scientific and engineering bent.  Subsequent developments, for the most part, have built on this engineering basis, and our educational system has perpetuated it.  Even though relatively few software creators these days need to understand the inner workings of the hardware (and even one layer of abstraction up), such low-level engineering is at the core of many computer science curricula.

As the power of computing machinery has grown, we've expanded the uses of software to take advantage of the new power, but we have remained an essentially engineering-based culture and have accrued other engineering-related words such as architecture and architect.  We have engineers and developers, systems analysts, and architects.  We have projects and project managers, and many try to manage software projects as if they were building projects.  We have builds, and we say we're building or developing or engineering software.

We have, built into our very language, an implicit association with physical building, and we have the association repeatedly reinforced by those who want to draw direct analogies between our trades.  Certainly, there are similarities, but I tend to think many of those similarities have been manufactured--they're not inherent to the nature of software.  We've painted ourselves into a corner by such analogies and borrowing of techniques and language.

Perceived Crisis of Complexity and Terminology
Now we're having this crisis, as some seem to paint it, where we need to elaborate further and push the idea that our systems are like cities and that we need varying levels and kinds of architects to help plan, build, maintain, and expand these software cities.  We have folks struggling to define what an architect is, what architecture is, and creating various stratifications within it to expand on this analogy.  We purportedly need enterprise architects, solutions architects, infrastructure architects, data architects, and more.

There is, I think, a well-intentioned effort to fix this because we do see the corner we've painted ourselves into, but we're reaching for the paint brush and bucket to solve it--reaching for those same ill-fashioned analogies, techniques, mindsets, and culture.  We see all this accrued complexity, and our solution is to make things even more complex, both terminologically and systematically, because we're engineers and scientists, and scientific problems are solved with scientific methods and precision, no?

The underlying problem, it seems, is that we're approaching things all wrong.  The problems we're solving are fundamentally human problems, particularly social problems.  And by social, I don't mean the social networking software that is now in vogue; I mean social in the basic sense of dealing with interactions between humans, be that economic, entertainment, education, social connection, or whatever.  It follows, then, that the best solution will be fundamentally human in nature, not scientific, not engineering.

Realigning with Our Core Problem Domain
Maybe we should avoid likening ourselves to engineering and scientific disciplines, and especially, we should shun terminology that ties us to them and binds our thinking into those molds.  As a man thinks, so is he, as the saying goes.  Surely, we can and should learn what we can from other disciplines, but we need to be more reluctant to insinuate them into our own as we have done with building.

I do think various solutions have been tried to better align software with its problem domain.  Object-oriented design is, at a generic level, an attempt to urge this sort of alignment, as is its more developed kin, domain-driven design.  Agile and its like work toward human-oriented processes for creating software.  Natural language systems, workflow systems, small-scale (solution-level) rule engines, and even some higher-level languages have attempted this.  And in fact, as a rule, I think they succeed better than those more closely tied to the computing and building conceptual models, except that even these more human-oriented abstractions are chained by the lower-level abstractions we've created.

What we need to do is continue to develop those human-oriented models of creating software.  It seems that we may be at a breaking point, however, for our continued use of the building paradigm.  Our repeated struggles with the terminology certainly seem to speak to that.  Our terribly confused and complicated enterprise systems landscape seems to speak to that.  Our control-driven, formal, gated processes have been thoroughly shown to be broken and inappropriate to the task of software creation.

New Terminology
To make the next step, perhaps we should reexamine at a fundamental level how we think about software, both the artifacts and how we create them.  I think we need to make a clean break with the engineering and building analogy.  Start fresh.  Unshackle our minds.  Maybe we need to drill down the abstraction layers and figure out where we can most effectively make the transition from hardware control to our human, social domain.  I imagine it would be lower than we have it now.  Or maybe it is just a matter of creating a better language, an intentional language (or languages), and moving away from our control-oriented languages.

At a higher level, we certainly need to rethink how we think about what we do.  Some folks talk about the "architect" being the "bridge" (or translator) between the business and the technical folks.  If that is a technical role, which I tend to doubt, it seems like a more appropriate title would be Technical Bridge or Technical Translator or Technical Business Facilitator or even just Software Facilitator.  Call it what it is--don't draw unnecessarily from another dubiously-related profession.

But maybe thinking this role is best served with a technical person is not ideal.  Maybe we technical folks are again trying to solve the problem with the wrong tools--us.   Well-intentioned though many are, if we are technical in tendency, skills, talent, and experience, we are not as well equipped to understand the squishy, human needs that software serves or best identify how to solve such squishy human problems.

Since software is essentially a human-oriented endeavor, perhaps we need a role more like that which has been emerging on the UX side of things, such as [user] experience designer or interaction designer.  They are better-equipped to really grok the essentially human needs being addressed by the software, and they can provide precise enough specification and translation to technical folks to create the experiences they're designing, even with the tools we have today.

Then again, some say that architects are the ones concerned with high-level, "important" views of a solution, interactions among individual pieces, that they are those who model these high-level concerns and even provide concrete tools and frameworks to help effectively piece them together.  I say that we could call this role solution coordinator, solution designer, or solution modeler.  But then, according to folks like Eric Evans, these folks should be hands-on to be effective,2 which I also believe to be true.  In that case, what they become, really, is a kind of manager or, simply, team leader, someone who's been there and done that and can help guide others in the best way to do it.  At this point, the skills needed are essentially technical and usually just a matured version of those actually crafting the solution. 

Instead of software developers and architects, how about we just have technical craftsmen?  The term is appropriate--we are shaping (crafting) technology for human use; it also scales well--you can add the usual qualifiers like "lead," "manager," "senior," whatever fits your needs.  There's no unnecessary distinction between activities--whether the craftsman is working on a higher-level design or a lower-level one, it is all essentially the activity of shaping technology for human use.  Depending on the scale of the team/endeavor, one craftsman may handle all levels of the craft or only part, and in the latter case, the division can easily be made based on experience and leadership.  And finally, it does not introduce cognitive dissonance through an extremely overextended and inaccurate analogy (like developer and architect).

Even if you don't like the term craftsman--we could collaborate to choose another that doesn't chain us to wrong thinking--the point remains that we should recognize that we've introduced unnecessary and unhelpful distinction in our discipline by using the dev and architect terminology.  We could begin to solve the conundrum by abandoning these titles.

Resisting the Urge to Rationalize and Control
Also, by looking at each solution as a craft--an individual solution tailored to address a particular human problem--it becomes clearer that we need not be so ready to try to rationalize all of these solutions into some greater system.  As soon as we do that, we fall back into the engineering and computing mode of thinking that will begin to impose unnatural constraints on the solutions and inhibit their ability to precisely and accurately solve the particular human need.3

As I suggested before, we should rather treat these solutions more like a biological ecosystem--letting selection and genetic mutation mechanisms prevail in a purely pragmatic way that such systems have so well embedded in their nature.  I believe it is a misplaced good intention to try to govern these systems in a rationalistic, control-driven way.  We deceive ourselves into thinking that we are managing complexity and increasing efficiency when in reality we are increasing complexity that then, recursively, also has to be managed in such a philosophy (creating an infinite complexity management loop).  We also reduce efficiency and effectiveness (well-fittedness) of solutions by interfering with solutions with controls and imposing artificial, external constraints on them to serve our governance schemes.4

Wrapping It All Up
Once we stop trying to align ourselves with a fundamentally different endeavor--physical building--we free ourselves to orient what we're doing towards the right domain--human social problems.  In doing so, we can re-examine our abstraction layers to ensure they most effectively fit that domain at the lowest possible level, and then we can start building new layers as needed to further enable effective (well-fitted) solutions for that domain.  By changing our language, we resolve the cognitive dissonance and illuminate where distinctions are truly needed, or not needed, and may even recognize where skills that are not inherently technical would better serve our solutions (such as UX pros).  And lastly, by treating the solutions as fundamentally human, we recognize that the most efficient, effective, time-tested5 and proven technique for managing them is more biological and less rational.  We see that they can best manage themselves, adapting as needed, to fit their environment in the most appropriate way possible.

If we're going to have a go at fixing the perceived current problem of complexity in software and, by extension, further understand how to solve it through our profession, I suggest that a somewhat radical departure from our current mode of thinking is needed, that we need to break away from the physical building analogy, and it seems to me that something like what I propose above has the most long-term promise for such a solution.  What do you think?

Notes
1. I should note that I recognize the artistic and ultimately social aspects of physical constructions; however, they are still fundamentally physical in nature--bridges are physically needed to facilitate crossing of water or expanse, buildings are needed physically for shelter.  The social aspects are adornments not inherent to the basic problems that these constructions solve.  The same cannot be said of software; it exists solely to serve human social advancement in one form or another.
2. See Eric Evans' "Hands-On Modeler" in Domain-Driven Design: Tackling Complexity in the Heart of Software.
3. As an aside, I truly do wonder why we should have to try to convince businesses of the need for the "architect" role.  If you ask me, the need, and our value/solution, should be obvious.  If it takes a lot of talking and hand waving, maybe we should question whether the solution we're proposing is actually the right one.
4. I have to nuance this.  Obviously, if there are governmental regulations you have to follow, some such controls are required; however, if you think about it, this is still adapting the solution to best fit the human problem because the human problem likely involves some need of societal protection.  Certainly not all systems need such controls, and even then only some within an organization need them.  Keep the controls scoped to the solutions that require them due to the human social conditions.  On the whole, I'd say that there are far more systems that don't need them, though the ones that do loom large in our minds.
5. By this I mean to say that, according to evolutionary theory, biological processes have developed over many millions of years and have proven themselves an effective means for highly emergent, living systems to self-govern.  Businesses and human social structures in general, especially these days, are highly emergent, dynamic, and living, and they need software that reflects that mode of being.

Tuesday, April 15, 2008 4:04:01 AM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [1]  | 
# Saturday, April 12, 2008

In his article, "Who Needs an Architect?", Martin Fowler says:

At a fascinating talk at the XP 2002 conference1, Enrico Zaninotto, an economist, analyzed the underlying thinking behind agile ideas in manufacturing and software development. One aspect I found particularly interesting was his comment that irreversibility was one of the prime drivers of complexity. He saw agile methods, in manufacturing and software development, as a shift that seeks to contain complexity by reducing irreversibility—as opposed to tackling other complexity drivers. I think that one of an architect’s most important tasks is to remove architecture by finding ways to eliminate irreversibility in software designs.

How interestingly this melds with my recent thoughts on managing complexity.2  You see, adding processes, management systems, and "governance" in general makes things more ossified, more difficult to change, i.e., less reversible.  According to Zaninotto, this would mean that the more governance we put in place to, theoretically, manage the complexity of our software systems, the more complex they are bound to become, which I think logically means that we are increasing our complexity woes rather than easing them through such efforts.

I came across this in a recent thread on our (now retired) architect MVP email list, where the age-old discussion of "what is an architect?" has come up again.  I have to admit, when I first seriously confronted this question, I was drawn in and fascinated.  I even wrote an article about it on ASPAlliance.3  Since writing that, I've been keeping an eye on the developments at IASA and elsewhere in this space, and developing my own thoughts.

I've delved even more into agile approaches, particularly Scrum and domain-driven design (DDD), and into this thing we call "user experience,"4 which at first glance seems counter to our architectural/engineering approaches to building software.  I've gained more experience building software as an architect and manager and observing software being built at the commercial level.  I've been more involved in the business and marketing side of things, and I've been blessed with the opportunity to learn from some of the leading minds in our profession. 

At this point, I'm of the get 'er done school, which I suppose might map loosely to Fowler's Architectus Oryzus, Eric Evans' Hands On Modeler, and others along those lines.  I'm bought into User-Centered Design (or human-centered design, for those who prefer that), though I think we need to figure out a good way to merge DDD with UCD and a smattering of service orientation (as needed!) to make software the best it can be.

Software is complex enough without our making it more so with artificial taxonomic and gubernatorial schemes.  Software should be teleological by nature.  It exists to serve an end, a purpose, and if it isn't serving that purpose, the answer is not to create counterproductive metastructures around it but rather to make the software itself better.

One of the chief complaints about IT is that we seem resistant to change or at least that we can't change at the speed of business.  Putting more processes, formalization, standardization, etc. in place exacerbates that problem.  The other biggie is that software doesn't meet the need it was designed to meet.  Both of these, at their core, have the same problem--ineffective and inefficient processes that are put in place to manage or govern the project.

I tend to think that projects need managing less than people need managing or, rather, coaching.  You get the right people, you give them the equipment, the training, and the opportunity to do the right thing, and you get out of the way and help them do it.  You don't manage to dates (or specs!); you manage to results.  If you don't have a solution that meets or exceeds the need at the end of the day, you failed.  In fact, I might go so far as to say that if what you built matches the original specs, you did something wrong.

Any managerial involvement should have a concrete and direct end in mind.  For instance, coordination with marketing and other groups requires some management, but such management should be communication-oriented, not control-oriented.  Start small and evolve your management over time.  Management, like other things that are designed, is best evolved over time5 to meet these concrete, real needs--and you should keep an eye out for vestigial management that can be extracted. 

Similarly, I don't think we need to tackle the software (IT) profession by trying to define and stratify everything we do.  In fact, I feel it would be a rather monumental waste of our collective valuable time.  One thing is certain: our profession will change.  New technologies and new ideas will combine with rapidly changing business needs, and new roles will emerge while old roles become irrelevant (or at least subsumed into new roles).  Monolithic efforts at cataloguing and defining (and by extension attempting to control) will, in the best of all possible worlds, be useful only for a short time.

It's clear that there are many approaches to doing software.  It's axiomatic that there are many distinct, even unique business needs (inasmuch as there are many unique individuals in the businesses).  What we should be doing, as a profession (indeed what I imagine and hope most of us are doing), is focusing on how to make great, successful software, not whiling away our lives and energy talking about ourselves.

If you ask me what I do (e.g., on a demographic form), I tend to put software maker, or just software.  Obviously, that's not specific enough for hiring purposes.  But really, in hiring, we're looking for knowledge, experience, skills, talents, and attributes, not a role or title.  A title is just a hook, a handy way to get someone interested.  If the market shows that using "architect" in a title catches the attention you want, use it (whether you're a worker or looking for workers).  The job description and interview process will filter at a finer level to see if there's a match.

Outside of that, we don't really need to spend a lot of time discussing it.  We're all just making software.  We all have unique knowledge, experience, talents, skills, and attributes, so there really is very little use in trying to categorize it much beyond the basic level.  So how about we stop agonizing over defining and stratifying "architecture" and "architect," stop worrying about controlling and governing and taxonomifying, and instead invest all that valuable time in just doing what we do--better!?

Notes
1.  More at http://martinfowler.com/articles/xp2002.html.
2. See One System to Rule Them All - Managing Complexity with Complexity and Software as a Biological Ecosystem.
3.  Read "What is Your Quest?" - Determining the Difference Between Being an Architect and Being a Developer.
4.  Good place to start: http://en.wikipedia.org/wiki/User_experience
5.  This principle is discussed, for example, in Donald Norman's The Design of Everyday Things.  Christopher Alexander also discusses a similar principle in The Timeless Way of Building.

Saturday, April 12, 2008 4:06:39 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, March 19, 2008

Just wanted to let you all know that I'll be speaking at and attending the upcoming ITARC in NYC, May 22-23. The conference is grassroots and platform-agnostic. Grady Booch is giving the keynote from Second Life. There are some great roundtables and panel discussions on SOA, SOAP vs. REST, as well as others.

It should be a good opportunity to get involved with the local architecture community and participate in discussions on what is currently happening. The registration price is lower than at other conferences because we are non-profit and just trying to cover the costs.

There is an attendance limit, and the early bird registration ends this month, so we encourage you to sign up as soon as possible.  Register Now!

Wednesday, March 19, 2008 10:10:19 AM (Eastern Standard Time, UTC-05:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, January 11, 2008

After posting my ramblings about software as a biological ecosystem last night, I kept thinking a bit more about the topic of managing complexity and what seems to be the high-end industry response to it.  Put simply, it seems that we're trying to manage complexity with yet more complexity (the whole adding gasoline to the fire analogy).  The more I think about it, the more absurd this approach seems.

And it suddenly came to me--we are seeking The Ring:

One Ring to rule them all, One Ring to find them,
One Ring to bring them all and in the darkness bind them.

This is the solution the industry seems to propose with things like enterprise rule management software and other centralized IT governance initiatives. 

One Policy to rule them all, one Policy to find them,
One Policy to bring them all and in the darkness bind them. 
[Feel free to substitute System, Architecture, or any other grandiose schemes.]

Do we really want to be Dark Lords?  Is "the architecture group" the land where the shadows lie?  I guess some might indeed aspire to be dark lords ruling from a land of shadow, but it never ends well for dark lords.  As the history of Middle-earth shows (and indeed human history), you can't oppress life, creativity, passion, and freedom, at least not for long.  The yoke of tyranny will always be thrown off.  Life will find a way.  Attach other pithy axiom here.

Create software, systems, and policies that are alive, that encourage life, that can grow, adapt, and evolve.

Friday, January 11, 2008 9:19:33 PM (Eastern Standard Time, UTC-05:00)  #    Disclaimer  |  Comments [0]  | 

This thought occurred to me the other day.  Maybe the right approach to managing complexity in business software is something akin to creating a biological ecosystem.  By this, I mean designing corrective mechanisms to address chaos as it emerges and, ultimately (the dream), designing systems that are biologically aggressive, that is, systems that look for niches to fill and take steps to preserve themselves.

I don't know.  I'm sure I'm not the first person to think about this.  It just hit me the other day as I was walking into work.  It seems like the more common approach we take is to try to create a mechanical system, as if the complexities of human interactions (i.e., business) could be specified and accounted for in a closed system.

I attended a session on managing complexity at the ITARC in San Diego last October, and the presenter was, if I recall correctly, advocating the more precise specification of business rules through the use of Object Role Modeling (in fact, Dr. Terry Halpin was in attendance at that session and was an active participant).  I had attended another session the previous day by a fellow from Fair Isaac on business rule management software.

All of these folks struck me as very intelligent and knowledgeable, and yet it seems to me that they are going in exactly the wrong direction.  In fact, I left that conference feeling very whelmed.  I felt as if I were living in a separate universe; at least I got the sense that there is a software multiverse, parallel software development universes, with me living in one and a lot of those guys in another.  All this talk of "governance" and highfalutin systems (e.g., grid SOA) leaves one feeling so disconnected from the everyday experience of being a software professional.

It seems to me that the solution to complexity in IT is not to create ever more complex mechanical systems, policies, and infrastructure to "govern" the problem.  It seems like that's throwing gasoline on the fire.  Not only that, it seems fundamentally opposed to the reality that is business, which is essentially a human enterprise based on humans interacting with other humans, usually trying to convince other humans to give them money instead of giving it to some other humans that want their money.

Because humans are intelligent and adaptable, particularly humans driven by, dare I say, greed (or at least self-preservation), these humans are constantly tweaking how they convince other humans to give them money.  The point is, business is fundamentally a human, and an aggressively biological, enterprise.  It consists of humans who are constantly on the lookout to fill new niches and aggressively defending their territories.  So it seems to me that business software should be modeled, at a fundamental level, on this paradigm rather than on the mechanical paradigm.

Of course, the problem is that the materials we're working with are not exactly conducive to that, but therein lies the challenge.  I tend to think that the efforts and direction being made by the agile community and approaches like domain-driven design are headed in the right direction.  At least they're focusing on the human aspects of software development and focusing in on the core business domains.  That's the right perspective to take.

Extend that to IT governance, and that means giving various IT departments within an enterprise the freedom to function in the best way that meets the needs of their local business units rather than trying to establish a monolithic, central architecture that attempts to handle all needs (think local government versus federal government).  It means developing with a focus on correction rather than anticipation, building leaner so that when change is needed, it is less costly (in a retrospective sense as well as in total cost of ownership).

I'm not advocating giving ourselves over to the chaos; I'm just thinking that this is a better way to manage the chaos.  And as we learn the best patterns to manage complexity in this way, it seems not too far a stretch to think that we could start automating mechanisms that help software systems be ever more agile and ultimately even anticipate the change that is needed by the business, either automatically making the adjustments needed or at the very least suggesting them.  That would be true business intelligence in software.

Maybe it's a pipe dream, but I think that without such dreams, we don't improve.  At the very least, I think it suggests that the agile approach to software is the right one, and that this approach should be extended and improved, not only in software development but also in architecture and IT in general.

What do you think?

Friday, January 11, 2008 12:01:33 AM (Eastern Standard Time, UTC-05:00)  #    Disclaimer  |  Comments [2]  | 
# Sunday, December 09, 2007

As I read1 the works of Christopher Alexander, I grew increasingly concerned that the software industry may be missing the point of patterns.  Well, maybe that's not the right way to put it.  I think we may be missing the real value that patterns bring to the table.

For whatever reason, it seems we approach them (broadly speaking) almost as loose algorithms to be applied here and there as seems fit.  Or maybe we just see them as convenient ways to talk about things we already know, or maybe we even use them to learn particular solutions to particular problems.  And then maybe we just use them because it is in vogue.

It seems to me that the real value to derive from patterns (as an idea, not necessarily as they are often proposed in the software world) is in learning to see and think about creating software in the best way.  What Alexander proposes at the end of The Timeless Way is that it isn't using patterns or pattern languages, per se, that gives our creations the quality without a name.  No, he proposes that the value lies in helping us to recognize the quality and teaching us to build in the timeless way.

The timeless way is more than patterns.  The thing is, patterns help us to get there.  I think in some ways, we do get it.  Those who are really into patterns do seem to recognize that patterns are not the solution to everything.  The problem is, I think, in that we are not using patterns in the most profitable way. 

I think part of the problem is in not using patterns as a language.  We have numerous catalogues of patterns.  To be sure, we do not lack for patterns, and sure, there is obviously value just in having these catalogues and in using the patterns here and there.  But I think that as long as we see patterns as individual things in a pattern catalogue, we won't use them to their full effectiveness.

Perhaps what we need to do is to figure out how to use them as a language.  Perhaps we need to weave them into our thoughts so that when we approach the problem of building software, patterns are there, guiding our thinking, helping us to best arrange a solution to fit the problem.  When we use our natural language, it does the same thing.  Our thoughts are constrained by our languages, but at the same time, our thoughts are guided by our languages.  The ideas form in our heads and rapidly coalesce into some structure that is based upon our language, and the structure works because of the language--it tells us what works and what doesn't work to articulate our ideas.

I think that a pattern language would have the same power.  If we get the patterns into our heads, then when we're faced with articulating a solution to a problem, we will think in terms of the patterns.  The patterns will give form to our solution, and because they are patterns, the solution will work.  The pattern language will both guide and shape our thinking towards solutions that have the quality without a name.

But then, as Alexander says of "the kernel," once we master the language, we move beyond it, so to speak.  The language is not an end in itself but a means to an end, a means to learn the timeless way.  It shapes our thinking to the extent that we are able to perceive the way even without a pattern.  And this is the superlative value in patterns that I think we're missing. 

Patterns, in themselves, have value, but as many have noted, they can be abused and misapplied.  The reason for this is not that a pattern (or patterns in general) is bad but that we're using them as ends in themselves.  If we simply let patterns shape the way we think about designing software, if we let them become a language, then we will learn to use them in ways that make sense and ultimately go beyond them and build great software even where a pattern doesn't exist.

So how do we do this?  Well, I think to some extent, we already do it.  I think there are people who use the language, who know the way, without necessarily being conscious of it.  And I think that there is a lot of great guidance out there that in a roundabout way does lead to building great software, even though it may not consciously be using patterns as a language.  But I do tend to think that there is far more bad or failed software out there that has come about because the language is not known, it is not explicit.

I think that what we need to do is to continue identifying patterns as best we can, but we need to start thinking about how to more firmly incorporate them into how we create software.  In fact, I think doing this, attempting to incorporate patterns more into development, will drive the further identification of patterns, to fill out patterns where we are lacking.  I also think it will help us to realize how patterns relate to each other, which is a big part of using them as a language and not just a bunch of monads floating about in the ether.  As we see them relating, see how they work together to form complete solutions, we'll better understand the language as well as the value of the language, and ultimately, we'll be able to impart that language to enable more of us to speak it.

This calls for those who build great software, who theoretically already know the way, to be introspective and retrospective.  It's not just a matter of looking about in the software world for repeated, similar solutions.  It's about identifying good solutions, solutions that bring software to life, not just addressing functional requirements, and forming from those solutions a language of patterns for building such software.  What do you think?

Notes
1. See Notes on the Notes of the Synthesis of Form and The Timeless Way is Agile.

Sunday, December 09, 2007 2:46:34 PM (Eastern Standard Time, UTC-05:00)  #    Disclaimer  |  Comments [2]  | 
# Monday, October 01, 2007

Previously, I mentioned I was working on an example of using Visual Studio to create a concrete domain model using object thinking, and here it is.  The domain I ended up modeling was that of a shared event calendar, including event registration and agenda planning.  This is something that's been kind of rolling in and out of my mind for quite a while now because it seems that we need a good system for this for all the code camps and like events that occur.  Of course, lately I've come across a few solutions that are already built1, but it seemed like a domain I knew enough about that I could take a whack at modeling it on my own.  I also figured it was small enough in scope for a sample.

So without further ado, I present you with the domain model:
Click to See Full Event Calendar Domain Model

I put this together in about an hour, maybe an hour and a half, on the train up to SD Best Practices.  When I started out modeling it, I was actually thinking more generally in the context of a calendar (like in Outlook), but I transformed the idea more towards the event planning calendar domain.  So you see some blending of an attendee being invited to a meeting with the event planning objects & behaviors (agenda, speaker, etc.).  Interestingly, they seem to meld okay, though it probably needs a bit of refactoring to, e.g., have an Attendee Register(Person) method on the Event object.

So the interesting thing to see here, contrasting it to the typical model you see in the .NET world (if you're lucky enough to see one at all!), is that there is pretty much no data, no simple properties or attributes, in the model.  The model is entirely objects and their behaviors and relationships to other objects.  You can look at this model and get a pretty darn good feel for the domain and also how the system functions as a whole to serve this domain.  I was able to identify and model the objects without once thinking about (and getting distracted with) particular data attributes.2
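To make the contrast concrete, here's a minimal C# sketch of what a behavior-only slice of such a model might look like, including the Attendee Register(Person) refactoring mentioned above.  The class and method names are my own illustrative guesses at the shapes in the diagram, not the actual code behind it:

```csharp
using System.Collections.Generic;

// A hypothetical behavior-only slice of the event calendar domain:
// no public data properties, just objects, their behaviors, and
// their relationships to other objects.
public class Person { }

public class Attendee
{
    private readonly Person _person;
    internal Attendee(Person person) { _person = person; }

    // Behavior, not data: an attendee confirms attendance.
    public void ConfirmAttendance() { /* domain behavior elided */ }
}

public class Session
{
    public void AssignSpeaker(Person speaker) { /* elided */ }
}

public class Event
{
    private readonly List<Attendee> _attendees = new List<Attendee>();
    private readonly List<Session> _agenda = new List<Session>();

    // The refactoring suggested above: registering a Person with the
    // Event yields an Attendee, keeping registration on Event itself.
    public Attendee Register(Person person)
    {
        var attendee = new Attendee(person);
        _attendees.Add(attendee);
        return attendee;
    }

    public void AddToAgenda(Session session) { _agenda.Add(session); }
}
```

Notice that reading the public surface tells you what the system does (register, assign, schedule), not what it stores.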

In the story of our Tangerine project, I describe in some depth the compromise I had to make with the .NET framework when it comes to data properties.  I think if I were to continue with this event calendar project, after I had nailed down the objects based on their behaviors (as begun in this example) and felt pretty good that it was spot on, at that point, I'd think about the data and do something like I did on Tangerine, having the open-ended property bag but also adding strongly-typed properties as needed to support framework tooling.3 
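The shape of that compromise might look something like the following sketch--an open-ended property bag on a base class, with strongly-typed properties layered on only where framework tooling (data binding, designers) demands them.  Again, the names here are hypothetical, not Tangerine's actual code:

```csharp
using System.Collections.Generic;

// Hypothetical base class: an open-ended property bag for flexibility.
public abstract class DomainObject
{
    private readonly Dictionary<string, object> _bag =
        new Dictionary<string, object>();

    protected T Get<T>(string name)
    {
        object value;
        return _bag.TryGetValue(name, out value) ? (T)value : default(T);
    }

    protected void Set(string name, object value)
    {
        _bag[name] = value;
    }
}

public class Speaker : DomainObject
{
    // A strongly-typed property added only to support tooling;
    // it simply delegates to the underlying bag.
    public string Bio
    {
        get { return Get<string>("Bio"); }
        set { Set("Bio", value); }
    }
}
```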

I hope you can imagine how you could sit with your clients or whoever your domain experts are and quickly map out a lightweight model of the domain using the VS Class Designer DSL.  I'll wager that if we took this diagram and showed it to a non-technical person, with a little help (maybe adding a key/legend), they'd quickly understand what's going on with the system.  And if you're building it with the domain expert, you'll have that dialog done already so that everyone will be on the same page.

Sure, there will be further refinement of both the domain model and the code; the nice thing about using the class designer DSL is that tweaking the model tweaks the code, so the two stay in sync.  We already mentioned the need to focus on the data at some point, and depending on your situation, you can do this with the domain experts or maybe you'll have an existing data model to work with.  As the developer, you're going to want to get in there and tweak the classes and methods to use best coding and framework practices, things that aren't best expressed in such a model.  You will have other concerns in the system to think about like security, performance, logging, user interface, etc., but that's all stuff you need to do regardless of how you approach analyzing and modeling your domain. 

In the end, you will have a fairly refined model of the domain (or part of the domain) that uses a language that everyone gets and agrees on (Eric Evans' "ubiquitous language"); you'll have identified the objects in the domain accurately based on their behaviors and relationships, and you'll even have a starting point in code for the implementation.  You also have objects that are responsible and that collaborate to get the job done, so in that way you avoid code complexity by reducing imperative control constructs.  All in all, it seems like a great foundation upon which to build the software.

Notes
1. Such as Microsoft Group Events, Community Megaphone, and Eventbrite.
2. Okay, so maybe I was tempted once or twice, but I fought the urge. :)
3. I suppose another option would be to create LINQ-based DTOs; I have to think more about how best to meld this kind of domain modeling with LINQ.

Monday, October 01, 2007 4:59:37 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [1]  | 
# Saturday, September 29, 2007

I finally got around to finishing The Timeless Way of Building, by Christopher Alexander (best known in software as the source of the patterns movement).  The last part of the book is called "The Way" to build things.  His focus is physical architecture, but it is interesting how closely it resembles agile software development.

There are a few similarities that I see.  First, he advocates (or at least shows in his example) working directly with the folks who are going to be using the building(s) when designing it with the pattern language.  You design it together with them.  Similarly, agile seems to advocate the same process of working as closely as possible with those who will be using the system.1

But Alexander goes on to say, using this real-world example of a health care complex he helped to build, that it almost failed (in terms of having the quality without a name) because even though it was initially designed using the pattern language, it was in the end passed off to builders who conformed the design to "drawings" (think UML) that ultimately caused it to lose a large amount of the quality.

The point he goes on to make is that you can't just use the language up front and then go translate it into formal design techniques and end up with the quality.  Rather, you have to build using the language, and in particular, build each part of the structure piecemeal, to best fit its particular environment, forces, context, and needs.  This is the only way that you can get the quality.  Here I see another similarity with agile and its focus on iterations and regular feedback.  You build in pieces, adapting each piece to its whole as it is built and ensuring that it best fits the needs, context, forces,  and environment. 

He also says that invariably our initial ideas and designs for a solution don't exactly reflect the ways in which the solution will be used.  And this disparity between our design and reality gets worse as the solution grows in scope.  Again, this is true in software and why the regular feedback is important, but Alexander proposes repair as a creative process in which we better adapt the solution to its environment based on deepening understanding of needs or when the design just isn't working or breaks.  This is akin to what we call refactoring, and like we do in software, Alexander advocates a continual process of repair (refactoring).  And this process doesn't stop when the initial thing is built--we keep tweaking it ad infinitum.

This seems somewhat intuitive, yet in software we're always talking about legacy systems, and many have suggested and continue to suggest "rewrites" as the answer to software woes.  While I understand that this is one area where software differs from real-world building (in the relative ease with which something can be redone), I do think that we software folks tend to err too much on the side of rewriting, thinking that if only we can start from scratch, our new system will be this glorious, shining zenith of elegance that will last forever.

It is this thinking, too, that even causes many of these rewrites to fail because so much time is spent trying to design a system that will last forever that the system is never completed (or becomes so complex that no one can maintain it), providing the next impetus for another "rewrite of the legacy system."  On the contrary, some of the best software I've seen is that which has simply been continuously maintained and improved, piece by piece, rather than trying to design (or redesign) an entire system at once. 

What is interesting to me in all this is the similarities between the process of building physical structures and that of building software, the general applicability of Alexander's thought to the creation of software.  I continually see this in Alexander's writing.  In part, it is good to see a confirmation of what we've been realizing in the software industry--that waterfall just doesn't work, that pre-built, reusable modules don't really work well, that we need regular, repeated input from stakeholders and users, that we shouldn't try to design it all up front, that we shouldn't use formal notations and categories that create solutions that fit the notations and categories better than their contexts, environments, and needs, that we should create and use pattern languages that are intelligible by ordinary people, and more.

There is one last observation I'd make about The Timeless Way of Building, regarding the "kernel of the way."  Alexander says that when it comes down to it, the core (the kernel) of the timeless way of building is not in the pattern language itself (the language is there to facilitate learning the timeless way); he says the core is in building in a way that is "egoless." 

In some ways, I think the concern about ego is less pronounced in the software world--rarely is a piece of software admired as a piece of art--but at the same time, the underlying message is that you build something to fit just so--not imposing your own preconceptions on how the thing should be built.  For software developers, I think the challenge is more in learning to see the world for what it is, to really understand the problem domain, to look at it through the eyes of the users and design a solution to fit that rather than trying to foist the software worldview onto the users.  To put it another way, we need to build software from the outside in, not the inside out.  The timeless way is really about truly seeing and then building to fit what you see.

Notes
1. At this point, another interesting thought occurs to me about pattern languages; I see a relation to Eric Evans' "ubiquitous language" in that the language you use needs to be shared between the builders and those using the thing being built.  What stands out to me is the idea of building a pattern language that is intelligible enough to non-software experts to be incorporated into the ubiquitous language shared by both the domain experts and the software experts.  Software patterns vary on this point; some are intelligible and some are not so intelligible; we need to make them intelligible.

Saturday, September 29, 2007 9:31:12 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, September 21, 2007

As I sit here on the train home, I've been thinking (and writing) about a lot of stuff.  But I figured I should put this post together for completeness and finality, even though I only made it to one session today before I left early.  Last night I was shocked and somewhat dismayed to find that I had somehow managed to book the return train for Saturday afternoon rather than today.  I looked at my reservation email, thinking surely the ticket was misprinted, but nope, the reservation says the 22nd, clearly, in black and white.

Now, those who spend much time with me know that I tend to be sort of the absent-minded professor type.  I often have trouble keeping track of the details of day-to-day things (but I can tie my shoes!).  I like to think there are good reasons for this, but whatever the reasons, that's me.  So I can totally imagine that somehow I tricked my brain into thinking that the 22nd was the day I wanted to return when I booked the train.

That said, I think this is a good opportunity to observe a way in which the UX of the reservations system could be improved.  If it had simply said somewhere that the return trip was on SATURDAY and not just used these obscure things called numeric dates, I'd immediately have seen and avoided my mistake.  But nowhere--not online, not in the email, not on the ticket--does it say Saturday.  In fact, there is SO MUCH GARBAGE on the ticket that the non-initiate has trouble finding anything of value.  So think about that if you're designing some sort of booking system--show the day of the week, please. :)
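For what it's worth, surfacing the day of the week is nearly free in most frameworks.  In .NET, for example, the "dddd" custom format specifier expands to the full day name (a sketch with a made-up reservation date, obviously not the booking system's actual code):

```csharp
using System;
using System.Globalization;

class TicketDisplay
{
    static void Main()
    {
        // Made-up reservation date; the point is the format string.
        var returnTrip = new DateTime(2007, 9, 22);

        // "dddd" yields the full day name, which would have made my
        // mistake obvious: "Saturday, September 22, 2007".
        Console.WriteLine(returnTrip.ToString(
            "dddd, MMMM d, yyyy", CultureInfo.GetCultureInfo("en-US")));
    }
}
```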

Lean Process Improvement
So this morning, on top of being tired because I stayed up late writing, I was late for the class I wanted to attend, one called Agile Architecture.  Unfortunately, it was in the smallest room at the conference (the same one as the manager meeting yesterday), and the planners didn't correctly anticipate attendance for that session.  Plus, this room had this odd little old lady who felt it was her duty to prevent anyone from attending who had to stand.

Yesterday, I watched her try to turn away quite a few folks (a few successfully), even though there was plenty of room on the far side to stand.  She kept saying "there really is no room," but there was.  What made the whole scene kind of comical was that she refused to sit OUTSIDE the door, so rather than simply preventing folks from coming in and causing a distraction, she let them come in, then animatedly tried to convince them to leave, causing even more distraction.

Well, when I peeked in the door this morning, saw the full room and saw her start heading toward me, I knew I was out of luck.  I just didn't have the heart to muscle by her and ignore her pleading to go stand on the other side, and besides, I don't like standing still for 1.5 hours anyway.  So I was off to find an alternative.

I knew there wasn't much else I wanted to see during that hour, but by golly I was there and this was the only slot I could make today, so I was going to make it to a session!  After two more failed entries into full sessions and studiously avoiding some that sounded extremely dull by their titles, I finally found one that sounded nominally interesting and had a lot of open space.  I really had no clue what I was getting into...

It ended up being somewhat interesting.  It was about applying the "lean process" from the manufacturing space to software development.  I'm personally not really into process and methodologies, particularly when they come from disciplines that are only marginally like our own.  But this did sound like it could be useful in some situations, particularly in software product (i.e., commercial) development. 

He talked about value stream mapping, which is basically modeling the process flow of specific activities in product development from beginning to end (so you'd do one for new feature dev, one for enhancements, one for hot fixes, etc.).  It sounds like it does have potential to be useful as long as you don't spend too much time on it.  Particularly if you think you have a problem in your process, this method can help you to both visualize and identify potential problems.  If you do product development, it's worth a look.
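Just to make the mechanics concrete, here's a minimal sketch in C# of what a value stream map boils down to--each step records time spent actually working versus time spent waiting, and the ratio hints at how lean the process is.  The step names and hours below are entirely made up:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // One step in a value stream: actual work time vs. time spent waiting.
    class ProcessStep
    {
        public string Name;
        public double WorkHours;  // value-adding time
        public double WaitHours;  // queue/handoff time

        public ProcessStep(string name, double workHours, double waitHours)
        {
            Name = name;
            WorkHours = workHours;
            WaitHours = waitHours;
        }
    }

    class ValueStreamMap
    {
        static void Main()
        {
            // Hypothetical numbers for a "new feature" stream.
            var steps = new List<ProcessStep>
            {
                new ProcessStep("Spec",    8, 40),
                new ProcessStep("Design", 16, 24),
                new ProcessStep("Code",   40,  8),
                new ProcessStep("Test",   16, 80),
                new ProcessStep("Release", 4, 40)
            };

            double workTime = steps.Sum(s => s.WorkHours);
            double leadTime = steps.Sum(s => s.WorkHours + s.WaitHours);

            // A low ratio says the process, not the people, is the bottleneck.
            Console.WriteLine("Lead time: {0}h; value-adding: {1}h ({2:P0})",
                leadTime, workTime, workTime / leadTime);
        }
    }

The real exercise is done with the team at a whiteboard, of course; the point is just that the underlying math is this simple.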

Final Thoughts
After that session, I made off to go to the 12:05 mass at the chapel outside the convention center.  My deacon friend had let me know about it, and I was glad of it.  And he was there, so after mass, we went back into the conference to grab lunch together.  Talked more about the usual, and then I had to run off to catch my train.

Looking back, I feel that this is definitely a conference worth attending.  Of course, your mileage will vary.  I wouldn't come here to go to a bunch of sessions on topics you're already an expert on.  But the nice thing about this conference over others I've been to is that it really is focused on best practices.  It's not really focused much on technology-specific stuff (though there was a bit of that), so you can derive value whether you do Java, C/C++, .NET, or whatever. 

Also, it is a good place for a meeting of minds with other technology experts, so you get more exposure than you might normally to how folks are doing software outside of your technological community.  And one interesting thing I noticed is that there is a tangible presence of software product developers, and that's a different and valuable perspective for those who are more used to, say, standard custom/consulting/corporate IT software.

Overall, if you look over the sessions and see topics that you haven't had a chance to explore in depth or maybe you want to just get exposed to other ideas in the software space, this seems like a good conference for that.  I really enjoyed it.

Friday, September 21, 2007 4:27:11 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, September 20, 2007

Today I stumbled into Barnes & Noble (because it had the nearest Starbucks), wandered into the notebook section, and was reminded that my current Moleskine notebook was almost full.  Silly me, I still have two back at the office, so I thought it must be fate for me to go ahead and restock while I'm here.  I highly recommend Moleskine; I like the small, book-like ones without lines because small is convenient enough to put in a pocket, and I don't like to conform to lines or have even the suggestion that I should, but they have all kinds.  Good, tough little notebooks, and supposedly they've been used by some famous people.  This has not been a paid advertisement for Moleskine.  Now we return you to your regular program.

Applying Perspectives to Software Views (Cont'd)
Yesterday I talked about Rebecca Wirfs-Brock's session on software views.  There's a lot more to what she said than what I communicated, but I'm just propounding what stuck with me.  Looking at my notes, I forgot to mention another key thing, which is that you should model these views and model them in a way that effectively communicates to the stakeholders that their needs are being addressed.  She threw up some UML diagrams, commenting that they're probably not good for most business folks.  (I think UML is not good for most technical folks either, but I'm a rebel like that.)  The point she made, though, was that regardless of what notation you use, provide a key that lets people know how to unlock the meaning of the model.  Good point for sure.

Actually, this reminds me of Beautiful Evidence, by Edward Tufte.  I recommend Tufte for his area of expertise, though I'd suggest skipping the chapter on PowerPoint (which sadly was released as a separate booklet) because it's not his area of expertise and it shows.  Anyways, when he is sticking to the realm of visual communication, he is excellent, and Beautiful Evidence is a pretty easy read that helps you start thinking about how to communicate "outside the box," as it were.  I bring it up here because applying his ideas in the area of modeling software, particularly for non-technical audiences, is something we should explore.

Now, back to Day III.

Software Managers
The first session I made it to kind of late (and it was absolutely packed--standing room only) was a session on tips for being a good technical/software manager.  Having become one of these this year, I find it definitely a subject of interest, and I'm always on the lookout for more tips, though I must say that I think management books (as a rule) are really bad about regurgitating each other.  It gets to where it becomes increasingly hard to find new, good insights the more of them you read.

But I thought this session would be good since it is specifically focused on managing technical teams.  Some of her points were standard managerial stuff, but it was nice to have it focused in on the IT industry.  I always end up feeling a bit guilty, though, because I know I've already made numerous faux pas (not sure how to pluralize that).  I hope my guys know I love them even though I screw up being a good manager at times. :) 

One recurring theme I keep coming across is having regular 1-1s with your peeps.  I've heard weekly and bi-weekly, but it seems like both of those would be overkill for my group since we have daily meetings, often go out to lunch, etc., so I'm going to try monthly first.  It'll be better than nothing! 

I have to say that managing well is a lot harder than I expected it to be.  For those of us who aren't natural people persons, it is definitely an effort.  I'm sure it is tough regardless, but I gotta think that it'd be easier if I were naturally more of a people person.  Anyways, I keep tryin' for now at least.

Designing for User Success
Went to another Larry Constantine session around UX.  This one was really good.  He, like Patton, affirmed that "user experience is about everything."  Again, it's nice to know I'm not crazy, and it takes a burden off me knowing that I won't be a lone voice crying out about that.  It seems that maybe just those who don't know anything about UX think it is "just another term for UI."  Of course, these "UX professionals" are naturally focused in on their areas of expertise (usability, information architecture, human factors, human-computer interaction, visual design, interaction design, etc.), so maybe I'm still a bit odd in my contention that architects must be the chief experience officers on their projects.

Anyhoo, this session focused on "user performance" as a distinct concern, meaning that you are providing the tools to get the best performance out of people.  Though none of the session was spent explicitly justifying the importance of a focus on UX, implicitly the whole session was an illustration of why it is important.  I have a ton of good notes from this session, but I won't bore you with them (you can probably get most of it from his slides or other presentations he's done).  If you get nothing else, though, it's this: change the way you think about designing software--design from the outside in.  If you're a smart person, you'll realize this has huge implications.  And also, recognize that you won't make all parts of your system perfectly usable, so prioritize your usability efforts based first on frequency of use and second on severity of impact (i.e., those things that will have serious ramifications if not done correctly).
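That prioritization rule is mechanical enough to sketch.  Assuming you've scored each screen (the screens and scores below are invented), it's just a two-key sort:

    using System;
    using System.Linq;

    class UsabilityTriage
    {
        static void Main()
        {
            // Hypothetical screens scored 1-5 for frequency of use
            // and for how bad a mistake on them would be.
            var screens = new[]
            {
                new { Name = "Order entry",    Frequency = 5, Severity = 3 },
                new { Name = "Dosage setup",   Frequency = 2, Severity = 5 },
                new { Name = "Report styling", Frequency = 1, Severity = 1 }
            };

            // First by frequency of use, then by severity of impact.
            var prioritized = screens.OrderByDescending(s => s.Frequency)
                                     .ThenByDescending(s => s.Severity);

            foreach (var s in prioritized)
                Console.WriteLine("{0} (freq {1}, severity {2})",
                    s.Name, s.Frequency, s.Severity);
        }
    }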

Human Factors in API Design
The next session I hit was one related to UX for developers.  Here are some salient one-liners:

  • Consistency is next to godliness.
  • API = Application Programmer Interface
  • When in doubt, leave it out. <-- More specifically, unless you have at least two, real use cases, don't stick it in your API.
  • Use the Iceberg Principle. <-- This means what people see of your code should only be the tip of the iceberg--keep it small, simple, and focused.

This session actually seemed to be a blend of general UX guidelines (yes, they apply here, too, not just on end-user interfaces) and more general framework design principles that only had varying degrees of pertinence to ease of use.  Some highlights (a quick sketch pulling a few of them together follows the list):

  • Default all members to private; only raise visibility with justification.
  • Prefer constructors to factory/builder pattern, and set up the object fully with the constructor where possible.
  • Use domain-specific vocabulary.
  • Prefer classes to interfaces.  Amen!
  • Prefer finality (sealing) to inheritance--minimize potential for overriding.
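Here's my attempt at a small C# sketch pulling a few of these together--private by default, fully set up by the constructor, sealed, domain vocabulary, and an iceberg-sized public surface.  The Invoice itself is invented for illustration:

    using System;

    // Sealed, set up completely by its constructor, named in domain terms,
    // and exposing only the tip of the iceberg.
    public sealed class Invoice
    {
        private readonly string customerName;  // private by default
        private decimal total;

        // The constructor leaves the object fully usable--no factory,
        // no half-initialized object plus a pile of setters.
        public Invoice(string customerName)
        {
            if (customerName == null)
                throw new ArgumentNullException("customerName");
            this.customerName = customerName;
        }

        public void AddLine(string description, int quantity, decimal unitPrice)
        {
            total += quantity * unitPrice;
        }

        public string CustomerName
        {
            get { return customerName; }
        }

        public decimal Total
        {
            get { return total; }
        }
    }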

There's a good deal more, and I'm not offering the justification he proposed (for brevity's sake).  I agree, with varying levels of vehemence, with most of what he said, but one area where I think I have to disagree is his advice to only refactor to patterns.  I can imagine where this comes from--because patterns can be abused ("patternitis," as he said).  But I think saying refactor to patterns shows a big misunderstanding of the point and value of patterns.  This is why it's important to pay attention to the context and rationale in a pattern--so you know when to apply it.  But patterns should be used where they apply--they're known, established, tried and true ways of solving particular problems in particular contexts!  If consistency is akin to godliness, using patterns is ambrosia.

One last interesting note from this session was the admonition to consider using or creating a domain-specific language where it helps with the usability of the API.  His example contrasted JMidi and JFugue: where JMidi is a terribly verbose API, requiring the construction and coordination of a host of objects to do something as simple as playing a note, JFugue offers a simple string-based DSL, based on musical notation, that lets you place a whole series of notes very compactly.  Good/interesting advice.
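To show the shape of the idea--not the actual JMidi or JFugue APIs, everything here is invented--here's a toy string-based music DSL in C#.  The single Play call replaces constructing and wiring a pile of note objects:

    using System;

    // A toy music DSL: "C5q" = note C, octave 5, quarter note.
    class MusicPlayer
    {
        public void Play(string notation)
        {
            foreach (string token in notation.Split(' '))
            {
                char note = token[0];
                int octave = token[1] - '0';
                char duration = token[2];  // q = quarter, h = half, w = whole

                Console.WriteLine("Playing {0} (octave {1}, duration '{2}')",
                    note, octave, duration);
            }
        }
    }

    class Demo
    {
        static void Main()
        {
            // One compact, readable line instead of an object graph.
            new MusicPlayer().Play("C5q E5q G5h");
        }
    }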

Pair Programming
The last session I went to today was one based on practical pair programming.  I was actually on my way to a class on Business Process Modeling Notation, which would have been potentially more intellectually stimulating, but I walked by the room hosting the Pair Programming session and had a sudden feeling I should attend it.  When I thought about it, I figured that I'd put off giving the idea fair play long enough and that I should take the time to hear it in more depth.  I figured it'd have more immediate relevance to my current work situation in any case.

I won't belabor all the points because I suspect, with good reason, that they're all the standard arguments for pair programming, along with a good bit of the "how" to do it in real situations.  He actually has a number of patterns and anti-patterns to further illustrate good/bad practices in pair programming.  It was an interesting extension of the pattern-based approach (to people).  Suffice it to say, I think if you can get buy-in in your organization, it is definitely worth a try.  There are numerous difficulties with it, the chief one being that it is hard to do effectively in a non-co-located environment, but I think I'd try it given the opportunity.

Random Thoughts
One conclusion I've come to while being here is that TDD seems to be unanimously accepted as a best practice by those who have actually tried it.  The API guy went so far as to say that he won't hire devs who don't have TDD experience.  (I think that's a bit short-sighted, but I take his point.)  It's something to think about for those still hesitating to adopt TDD.
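For anyone who hasn't tried it, the mechanics are less intimidating than the debate suggests.  A first test (sketched here in NUnit style, reusing the hypothetical Invoice from the API sketch above) gets written before the implementation, fails, and then drives the code that makes it pass:

    using NUnit.Framework;

    [TestFixture]
    public class InvoiceTests
    {
        // Written first, this fails until Invoice actually sums its lines;
        // afterwards it stays around as a regression check.
        [Test]
        public void AddLine_AccumulatesIntoTotal()
        {
            Invoice invoice = new Invoice("Acme");
            invoice.AddLine("Widget", 2, 10.00m);

            Assert.AreEqual(20.00m, invoice.Total);
        }
    }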

I met up again with the same fella I met last night.  We were both in the pair programming class at the end of the day; he's been doing pair programming on a few teams at his company for years and is a fan, though he definitely attests to the difficulty of dealing with prima donnas, which apparently are more tolerated in his LOB (because they have very specialized knowledge that requires PhD-level education).  So he wasn't able to carry XP to his entire company.  He also said (and the presenter echoed this) that pairing is a taxing process; 4-5 hours a day is about the max.

We also had a good long chat about things Catholic.  It's good to know that we Catholics will be getting another good, solid deacon in him.  I imagine tonight won't be the last time we talk.

All in all, another great day.  Learned a bunch.  No sessions I regret going to thus far, which I think is a big compliment for a conference. :)

Thursday, September 20, 2007 10:48:34 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [1]  | 
# Wednesday, September 19, 2007

Hi again.  Today was another good day at the conference. 

User Experience Distilled

The first class I attended was a whirlwind tour of user experience.  I was heartened to learn that I am not alone or crazy in recognizing that there are a number of disciplines that go into this thing we call UX, and the presenter, Jeff Patton, also recognizes that virtually every role in developing software has an effect on UX, a conclusion I have also come to (as I hint at in IG's UX area).  I develop the idea more explicitly in an unpublished paper I'm working on.  (I'm hoping the inputs I get from this conference will help me to finish that out.)

I actually think that all of this UX stuff falls under the architect's purview because (in my mind at least) he or she is primarily responsible for designing the software as a whole.  This means that architects need to have a conversational familiarity (at least) with the different disciplines that people traditionally think of as user-oriented disciplines, but I'd take it a step further and say that the architect needs to be the chief experience officer, as it were, on a software project.  The architect needs to ensure that the appropriate expertise in user-oriented disciplines is brought to bear on his or her project and also needs to understand how the other aspects of software design and development impact UX and optimize them for good UX. 

That discussion aside, Jeff had a pretty clever graph that showed how the kind of software being developed affects the perceived ROI of expenditure on UX.  His talk also was about as effective an introduction to UX as I can imagine.  He dealt with what it is, why it's important, and then offered a high-level overview of key bits of knowledge for people to make use of.  I want to steal his slides! :)

Global Teams & Outsourcing Agilely

The keynote during lunch today was done by Scott Ambler.  It was nice to finally see/hear him in person since I've heard so much about him.  I got the feeling (which he even admitted) that he was presenting stuff that wasn't just his--from what I could tell, an overview of a book that IBM publishes (related) on the subject.  But that didn't take away from the value of the knowledge by any means.  I'd definitely check it out if you're going to be dealing with geographically distributed teams.

Usability Peer Reviews

In my continuing quest to learn more about UX (part of which is usability), I attended a class by Larry Constantine about lightweight usability practice through peer review/inspection (related paper).  I was actually surprised because he has a very formal methodology for this, which means he's put a lot of thought into it but, more importantly, he's used it a lot in consulting, so it is tested.  I personally am not a big fan of being too formal with these things.  I understand the value in formalizing guidance into a repeatable methodology, but I've always felt that these things should be learned for their principles and less for their strictures.  Of course, that runs the risk of missing something important, but I guess that's a trade-off.  Regardless of whether you follow it to a T, there's a ton of good stuff to be learned from this technique on how to plug usability QA into the software process.

Applying Perspectives to Software Views

After that, I slipped over to another Rebecca Wirfs-Brock presentation on applying perspectives to software views in architecture.  (She was presenting the subject of this book.)  To me, the key takeaway was that we should figure out the most important aspects of our system and focus on those.  It echoed (in my mind) core sentiments of domain-driven design, though it used different terminology and approach.  I think the two are complementary--using the view approach helps you to think about the different non-functional aspects.  Using strategic DDD (in particular, distilling the domain) helps you and stakeholders to focus in on the most important aspects of the system from a domain strategy perspective, and that will inform which views and perspectives are the ones that need the focus. 

This approach also echoes the sentiment expressed by Evans yesterday that you can't make every part of the system well-designed (elegant/close to perfection).  Once you accept that, you can then use these approaches to find the parts of the system where you need to focus most of your energies.  I really like that this practical truth is being made explicit because I think it can help to overcome a lot of the problems that crop up in software development that have to do with the generally idealistic nature that we geeks have.

Expo

After the classes today, they had the expo open.  In terms of professional presentation, it was on par with TechEd's Expo, but certainly the scope (number of sponsors) was far smaller.  That said, I popped into the embedded systems expo.  That was a new experience for me.  It was interesting to see almost every booth with some kind of exposed hardware on display.  As a software guy, I tend to take all that stuff for granted.  They even had a booth with specialized networked sensors for tanks of liquid.  This stuff stirred recollections of weird science and all the other fun fantasies that geeky kids have about building computerized machines.  The coolest thing there was the Intel chopper, which apparently was built by the Orange County Chopper guys, but it had a lot of fancy embedded system stuff on it.  I didn't stick around to hear the spiel, but it was pretty cool.

After the expo, I bumped into a guy at the Cheesecake Factory.  We started chatting, and it turns out that he's in the process of becoming a Roman Catholic deacon.  Pretty cool coincidence for me!  We talked about two of my top passions--my faith and software development (as exemplified here on dotNetTemplar!).  It was a good dinner.  He works at a company that does computer-aided engineering; sounds like neat stuff with all that 3D modeling and virtual physics.  Way out of my league!

As I said, another good day here at SD Best Practices.

Wednesday, September 19, 2007 9:54:31 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 

I meant to write this last night, but I didn't get back to my room till late and just felt like crashing.  I'm at the SD Best Practices conference in Boston this week, which is a new experience for me.  It's one of a very few non-MS-oriented conferences I've attended, and I really wanted to come because best practices are a passion for me (and part of my job).  Infragistics was kind enough to send me.  I thought I'd share my experiences for anyone else considering going (and just for my own reference... hehe).  Anyways, enough of the intro...

Day 1 - Tuesday, 18 September 2007

First off, let me say I like the idea of starting on a Tuesday.  It let me work for a good part of the day on Monday and still make it out here by train on Monday night.  I've found in the past that attending sessions non-stop for a few days can really wear you out, so four days seems about right.

The conference is in the Hynes convention center, and I'm at the Westin, a stone's throw away.  Also, it's right next to the Back Bay Station, so thus far the logistics aspect has worked out quite well for me.  I'd personally much rather take a train over a plane anytime. 

Responsibility-Driven Design

Tuesday was a day of "tutorials," which are half-day sessions.  So in the morning, I attended Rebecca Wirfs-Brock's tour of responsibility-driven design (RDD?).  I actually had her book at one point because it was mentioned in a good light by Dr. West in his Object Thinking, but somewhere along the line I seem to have lost it.  Anyways, I was glad to get a chance to learn from the author directly and to interact. 

From what I can ascertain, RDD has some good insight into how to do good object design.  It seems to me that thinking in terms of responsibilities can help you properly break apart the domain into objects if you struggle with just thinking in terms of behavior.  It's potentially easier than just thinking in terms of behaviors because while behaviors will certainly be responsibilities, objects can also have the responsibility to "know" certain things, so it is a broader way of thinking about objects that includes their data.

That said, it doesn't really negate the point of focusing on behaviors, particularly for folks with a data-oriented background, because I do think that focusing on the behaviors is the right way to discover objects and assign them the appropriate responsibilities.  I think the key difference is that with the object-thinking approach, you know that there will be data and that it is important to deal with, but you keep it in the right perspective--you don't let it become the focus of your object discovery.
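To make the knowing-versus-doing distinction concrete, here's a minimal sketch (the Order is invented for illustration); both kinds of responsibility come out as behavior rather than as a bag of public getters and setters:

    using System.Collections.Generic;
    using System.Linq;

    // An object with two kinds of responsibility: KNOWING its line
    // amounts and DOING the arithmetic on them.
    public class Order
    {
        private readonly List<decimal> lineAmounts = new List<decimal>();

        // A "knowing" responsibility, expressed behaviorally.
        public void RememberLine(decimal amount)
        {
            lineAmounts.Add(amount);
        }

        // A "doing" responsibility.
        public decimal CalculateTotal()
        {
            return lineAmounts.Sum();
        }
    }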

Another beneficial thing I think Ms Wirfs-Brock has is the idea of using stereotypes as a way to discover objects in the domain.  This is more helpful, I think, when dealing with objects that are more part of the software domain than those in the business domain because the stereotypes are very software-oriented (interfacers, information holders, etc.). 

In terms of process, she advocates this idea of having everyone on a team write their thoughts down about the problem being faced in a few sentences, focusing on what seems like it'll be a challenge, what will be easy, what you've run into before, etc.  Then have everyone bring those to the initial design meetings.  I like the idea because it bypasses the introvert-extrovert problem you sometimes get in meetings, and you can start out with a lot of ideas to really jump start the design.  It's a good way to ensure you don't miss out on ideas due to personality issues.

The other thing I like in her process is writing down a purpose statement for objects as you discover them and thinking of them as candidates.  This is part of the CRC card process (the first C is now "candidates").  The reason I like it is that it helps you to focus on the point of the object and sort of justify its existence, which can help weed out some bad ideas. 

What I don't like about the process is the overall CRC card idea.  While it surely is more lightweight than many ways to approach object design, you still end up with a bunch of paper that you then have to translate into code at some point.  I much prefer to use a tool that will literally be creating the code as I design.  I've found the VS class designer serves this purpose quite well.  In fact, on the way up here, I spent some time doing up a sample class diagram using the object thinking approach to share as an example of domain modeling.  I'll be sharing it soon, but I just mention it to say this is not just speculation.  It was actually very lightweight and easy to discover objects and model the domain that way, and at the end I had literal code that I could then either fill out or hand off to other devs to work on, who could then further refine it.

Domain-Driven Design

The second session I attended was one by Eric Evans on strategic domain-driven design.  Eric wrote a book on the subject that's been well received by everyone I've encountered who has spent time with it.  I've seen a presentation on it, and I've read parts of Jimmy Nilsson's Applying Domain-Driven Design and Patterns book.  So I thought I was acquainted well enough with the ideas, but as I often find to be the case, if you rely on second-hand info, you'll inevitably get a version of the info that has been interpreted and is biased towards that person's point of view.

For instance, most of what I've seen on DDD is focused on what Eric calls "tactical" DDD, i.e., figuring out the objects in the domain and ensuring you stay on track with the domain using what he calls the "ubiquitous language."  Eric presented parts of his ideas yesterday that he calls "strategic" because they are more geared towards strategic level thinking in how you approach building your software.  Two key takeaways I saw were what he calls context mapping, which seems to be a really effective way to analyze existing software to find where the real problems lie, and distilling the domain, which is a way to really focus in on the core part of a system that you need to design.

In short (very abbreviated), he claims (and I agree) that no large system will be completely well designed, nor does it need to be.  This isn't to say you should be sloppy, but it helps you to focus your energies where they need to be focused--on the core domain.  Doing this actually can help the business figure out where they should consider buying off-the-shelf solutions and/or outsourcing, as well as where to focus their best folks.  It's a pretty concrete way to answer the buy vs. build question.

Anyways, I'm definitely going to get his book to dig in deeper (it's already on the way).  Please don't take my CliffsNotes here as the end of your exploration of DDD.  It definitely warrants further digging, and it is very complementary to a good OOD approach.

After all this, I was privileged enough to bump into Eric and have dinner, getting to pick his brain a bit about how all his thinking on DDD came together, his perspectives on software development, and how to encourage adoption of better design practices (among other things).  Very interesting conversation, one that would have been good for a podcast.  I won't share the details, but I'm sure folks will eventually see some influence this conversation had on me.  Good stuff.

Software for Your Head

I almost forgot about Jim McCarthy's keynote.  I've only seen Jim twice (once in person and once recorded).  He's a very interesting and dynamic speaker, which makes up for some of the lack of coherence.  I find the best speakers tend to come across a bit less coherent because they let speaking become an adventure that takes them where it will.  But I do think there was definitely value in his message.  I tend to agree that he's right in asserting that what we all do on a daily basis has a larger impact on humanity than we realize, and I can't argue with his experience in building teams that work.  http://www.mccarthyshow.com/ is definitely worth a look.

Overall, Tuesday was a big success from an attendee perspective.  So far so good!

Wednesday, September 19, 2007 11:17:20 AM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, September 17, 2007

When I searched recently for further reading on "domain model" to link in a recent post, I was quite surprised to find that there seemed to be no good definition readily available (at least not by Googling "domain model").  Since I tend to use this term a lot, I figured I'd try to fill this gap and, at the very least, provide a reference for me to use when I talk about it.

So What is a Domain Model?
Put simply, a domain model is the software model of a particular domain of knowledge (is that a tautology?).  Usually, this means a business domain, but it could also mean a software domain (such as the UI domain, the data access and persistence domain, the logging domain, etc.).  More specifically, this means an executable representation of the objects in a domain with a particular focus on their behaviors and relationships1.

The point of the domain model is to accurately represent these objects and their behaviors such that there is a one-to-one mapping from the model to the domain (or at least as close as you can get to this).  The reason this is important is that it is the heart of software solutions.  If you accurately model the domain, your solution will actually solve the problems by automating the domain itself, which is the point of pretty much all business software.  It will do this with much less effort on your part than other approaches to software solutions because the objects are doing the work that they should be doing--the same that they do in the physical world.  This is part and parcel of object-oriented design2.

Nothing New
By the way, this is not a new concept--OO theory and practice has been around for decades.  It's just that somewhere along the line, the essence of objects (and object-oriented design) seems to have been lost or at least distorted, and many, if not most, Microsoft developers have probably not been exposed to it, have forgotten it, or have been confused into designing software in terms of data.  I limit myself to "Microsoft developers" here because they are the ones I have the most experience with, but I'd wager, from what I've read, the same is true of Java and other business developers. 

I make this claim because everyone seems to think they're doing OO, but concrete examples of OOD using Microsoft technologies are few and far between.  Those who try seem to be more concerned with building in framework services (e.g., change tracking, data binding, serialization, localization, and data access & persistence) than actually modeling a domain.  Not that these framework services are unimportant, but it seems to me that this approach is fundamentally flawed because the focus is on software framework services and details instead of on the problem domain--the business domain that the solutions are being built for. 

The Data Divide
I seem to write about this a lot; it's on my mind a lot3.  Those who try to do OOD with these technologies usually end up being forced into doing it in a way that misses the point of OOD.  There is an unnatural focus on data and data access & persistence.  Okay, maybe it is natural or it seems natural because it is ingrained, and truly a large part of business software deals with accessing and storing data, but even so, as I said in Purporting the Potence of Process4, "data is only important in as much as it supports the process that we’re trying to automate." 

In other words, it is indeed indispensable but, all the same, it should not be the end or focus of software development (unless you're writing, say, a database or ORM).  It may sound like I am anti-data or being unrealistic, but I'm not--I just feel the need to correct for what seems to be an improper focus on data.  When designing an application, think and speak in terms of the domain (and continue to think in terms of the domain throughout the software creation process), and when designing objects, think and speak in terms of behaviors, not data. 

The data is there; the data will come, but your initial object models should not involve data as a first class citizen.  You'll have to think about the data at some point, which will inevitably lead to specifying properties on your objects so you can take advantage of the many framework services that depend on strongly-typed properties, but resist the temptation to focus on properties.  Force yourself to not add any properties except for those that create a relationship between objects; use the VS class designer and choose to show those properties as relationships (right-click on the properties and choose the right relationship type).  Create inheritance not based on shared properties but on shared behaviors (this in itself is huge).  If you do this, you're taking one step in the right direction, and I think in time you will find this a better way to design software solutions.
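Here's a minimal sketch of what I mean, with all names invented: the only property expresses a relationship to another domain object, and the inheritance hangs off shared behavior rather than shared data:

    public class Order { /* a related domain object, elided here */ }

    // The hierarchy exists because of shared BEHAVIOR (dispatching),
    // not because the subclasses happen to share fields.
    public abstract class Shipment
    {
        private readonly Order order;

        protected Shipment(Order order)
        {
            this.order = order;
        }

        // The one property: a relationship to another domain object.
        public Order Order
        {
            get { return order; }
        }

        // The shared behavior that justifies the hierarchy.
        public abstract void Dispatch();
    }

    public class CourierShipment : Shipment
    {
        public CourierShipment(Order order) : base(order) { }

        public override void Dispatch()
        {
            // hand the order's packages off to the courier...
        }
    }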

My intent here is certainly not to make anyone feel dumb, stupid, or like they've wasted their lives in building software using other approaches.  My intent is to push us towards what seems to be a better way of designing software.  Having been there myself, I know how easy it is to fall into that way of thinking and to imagine that simply by using these things called classes, inheritance, and properties that we're doing OOD the right way when we're really not.  It's a tough habit to break, but the first step is acknowledging that there is (or at least might be) a problem; the second step is to give object thinking a chance.  It seems to me that it is (still) the best way to do software and will continue to be in perpetuity (because the philosophical underpinnings are solid and not subject to change).

Notes
1. An object relationship, as I see it, is a special kind of behavior--that of using or being used.  This is also sometimes represented as a having, e.g., this object has one or more of these objects.  It is different from data because a datum is just a simple attribute (property) of an object; the attribute is not an object per se, at least not in the domain model because it has no behaviors of its own apart from the object it is attached to.  It is just information about a domain object.

2. I go into this in some depth in the Story paper in the Infragistics Tangerine exemplar (see the "To OOD or Not to OOD" section).  I use the exemplar itself to show one way of approaching domain modeling, and the Story paper describes the approach.

3. Most recently, I wrote about this in the Tangerine Story (see Note 2 above).  I also wrote publicly about it back in late 2005, early 2006 in "I Object," published by CoDe Magazine.  My thought has developed since writing that.  Interestingly, in almost two years, we seem to have only gotten marginally better ways to deal with OOD in .NET. 

4. In that article, I put a lot of focus on "process."  I still think the emphasis is valid, but I'd temper it with the caveat that however business rules are implemented (such as in the proposed workflow-driven validation service), you still think of that as part of your domain model.  The reason for separating them into a separate workflowed service is a compromise between pragmatism and idealism given the .NET platform as the implementation platform.  I've also since learned that the WF rules engine can be used apart from an actual .NET workflow, so depending on your application needs, just embedding the rules engine into your domain model may be a better way to go than using the full WF engine.  If your workflow is simple, this may be a better way to approach doing validation.

Monday, September 17, 2007 11:41:54 AM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Saturday, September 15, 2007

As I sit here on my deck, enjoying the cool autumn breeze1, I thought, what better thing to write about than Web services!  Well, no, actually I am just recalling some stuff that's happened lately--on the MSDN Architecture forums and in some coding and design discussions we had this week--both of which involved the question of best practices for Web services.

Before we talk about Web services best practices, it seems to me that we need to distinguish between two kinds of application services.  First, there are the services that everyone has been talking about for the last several years--those that pertain to service-oriented architecture (SOA).  These are the services that fall into the application integration camp, so I like to call them inter-application services. 

Second, there are services that are in place to make a complete application, such as logging, exception handling, data access and persistence, etc.--pretty much anything that makes an application go and is not a behavior of a particular domain object.  Maybe thinking of them as domain object services would work, but I fear I may already be losing some, so let's get back to it.  The main concern of this post is those services used within an application, so I call them intra-application services.

It seems like these latter services, the intra-application ones, are often confused with the former--the inter-application services.  It's certainly understandable because there has been so much hype around SOA in recent years that the term "service" has been taken over and has lost its more generic meaning.  What's worse is that there has been a lot of confusion around the interplay of the terms Web service and just plain service (in the context of SOA).  The result is that you have folks thinking that all Web services are SO services and sometimes that SO services are always Web services.

My hope here is to make some clarification as to the way I think we should be thinking about all this.  First off, Web services are, in my book at least, simply a way of saying HTTP-protocol-based services, usually involving XML as the message format.  There is no, nor should there be, any implicit connection between the term Web service and service-oriented service.  So when you think Web service, don't assume anything more than that you're dealing with a software service that uses HTTP and XML. 

The more important distinction comes in the intent of the service--the purpose the service is designed for.  Before you even start worrying about whether a service is a Web service or not, you need to figure out what the purpose of the service is.  This is where I get pragmatic (and those who know me know that I tend to be an idealist at heart).  You simply need to determine if the service in question will be consumed by a client that you do not control. 

The reason this question is important is that it dramatically affects how you design the service.  If the answer is yes, you automatically take on the burden of treating the service as an integration (inter-application) service, and you must concern yourself with following best practices for those kinds of services.  The core guideline is that you cannot assume anything about the way your service will be used.  These services are the SO-type services that are much harder to design correctly, and there is tons of guidance available on how to do them2.  I won't go in further depth on those here.

I do think, though, that the other kind of services--intra-application services--have been broadly overlooked or just lost amidst all the discussion of the other kind.  Intra-application services do not have the external burdens that inter-application services have.  They can and should be designed to serve the needs of your application or, in the case of cross-cutting services (concerns) to serve the needs of the applications within your enterprise.  The wonderful thing about this is that you do have influence over your consumers, so you can safely make assumptions about them to enable you to make compromises in favor of other architectural concerns like performance, ease of use, maintainability, etc.

Now let's bring this back to the concrete question of best practices for intra-application Web services.  For those who are using object-oriented design, designing a strong domain model, you may run into quite a bit of trouble when you need to distribute your application across physical (or at least process) tiers.  Often this is the case for smart client applications--you have a rich front end client that uses Web services to communicate (usually for data access and persistence).  The problem is that when you cross process boundaries, you end up needing to serialize, and with Web services, you usually serialize to XML.  That in itself can pose some challenges, mainly around identity of objects, but with .NET, you also have to deal with the quirks of the serialization mechanisms.

For example, the default XML serialization is such that your properties have to be public and read-write, and you must have a default constructor.  These requirements can break encapsulation and make it harder to design an object model that you can count on to act the way you expect it to.  WCF makes this better by letting you use attributes to have better control over serialization.  The other commonly faced challenge is on the client.  By default, if you use the VS Add Web Reference, it takes care of the trouble of generating your service proxies, but it introduces a separate set of proxy objects that are of different types than your domain objects.
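On the serialization point, here's a rough sketch of the kind of control I mean, using WCF's data contract attributes (the Customer type is invented).  Members are opted in explicitly, they don't have to be public read-write, and no public default constructor is required:

    using System;
    using System.Runtime.Serialization;

    [DataContract]
    public class Customer
    {
        [DataMember]
        private string name;  // serialized even though it's private

        // The domain object can keep enforcing its own invariants
        // because no public parameterless constructor is needed.
        public Customer(string name)
        {
            if (name == null) throw new ArgumentNullException("name");
            this.name = name;
        }

        public string Name
        {
            get { return name; }
        }
    }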

So you're left with the option of either using the proxy as-is and writing a conversion routine to convert the proxy objects to your domain objects, or modifying the proxy to use your actual domain objects.  The first solution introduces both a performance hit (creating more objects and transferring more data) and a complexity hit (conversion routines to maintain); the second solution introduces just a complexity hit (you have to modify the generated proxy a bit).  Neither solution is perfectly elegant--we'd need the framework to change to support this scenario elegantly; as it is now, the Web services stuff is designed more with inter-application services in mind (hence the dumb proxies that encourage an anemic domain model) than the intra-application scenario we have, where we intend to use the domain model itself on the client side.

If you take nothing else away from this discussion, I'd suggest the key takeaway is that when designing Web services, it is perfectly valid to do so within the scope of your application (or enterprise framework).  There is a class of services for which it is safe to make assumptions about the clients, and you shouldn't let all of the highfalutin talk about SOA, WS-*, interoperability, etc. concern you if your scenario does not involve integration with other systems that are out of your control.  If you find the need for such integration at a later point, you can design services (in a service layer) then to meet those needs, and you won't be shooting yourself in the foot now by trying to design one-size-fits-all services that make so many compromises the app ends up either impossible to use or very poorly performing.

My own preference, and what I'd recommend, is to use the command-line tools that will generate proxies for you (you can even include a batch file in your project to do this) but then modify them to work with your domain model--you don't even need your clients to use the service proxies directly.  If you use a provider model (plugin pattern) for these services, you can design a set of providers that use the Web services and a set that talk directly to your database.  This enables you to use your domain model easily in both scenarios (both in a Web application that talks directly to the db as well as a smart client that uses Web services). 
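The shape I have in mind is roughly this (all names hypothetical): the rest of the application codes against one interface, and configuration decides which provider gets plugged in:

    // The domain layer sees only this contract.
    public interface ICustomerProvider
    {
        Customer GetCustomer(int id);
        void SaveCustomer(Customer customer);
    }

    // Plugged in for the smart client: wraps the generated service
    // proxies and translates to/from real domain objects.
    public class WebServiceCustomerProvider : ICustomerProvider
    {
        public Customer GetCustomer(int id)
        {
            // call the Web service proxy and map to the domain object...
            throw new System.NotImplementedException();
        }

        public void SaveCustomer(Customer customer)
        {
            // map and send through the Web service proxy...
            throw new System.NotImplementedException();
        }
    }

    // Plugged in for the Web app: talks directly to the database.
    public class SqlCustomerProvider : ICustomerProvider
    {
        public Customer GetCustomer(int id)
        {
            // query the database and materialize the domain object...
            throw new System.NotImplementedException();
        }

        public void SaveCustomer(Customer customer)
        {
            // persist directly...
            throw new System.NotImplementedException();
        }
    }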

It requires a little extra effort, but it means you can design and use a real domain model and make it easier to use by hiding the complexity of dealing with these framework deficiencies from consumers of the domain model.  This is especially helpful in situations where you have different sets of developers working on different layers of the application, but it is also ideal for use and reuse by future developers as well.

One of these days, I'll write some sample code to exemplify this approach, maybe as part of a future exemplar.

Notes
1. The weatherthing says it's 65 degrees Fahrenheit right now--at 1pm!
2. My observation is that it is safe to assume that when other people talk about services and Web services, these are the kind they're thinking of, even if they don't make the distinction I do in this post. 

Saturday, September 15, 2007 6:00:03 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, August 14, 2007

Thanks to a sharp co-worker of mine, I was recently introduced to "Magic Ink: Information Software and the Graphical Interface," by Bret Victor.  It was quite an interesting read; Victor makes a lot of good points.  For instance, he suggests that we should view information software as graphic design, i.e., taking the concerns of traditional graphic design as paramount and then taking it to the next level by availing ourselves of context-sensitivity, which he defines as inferring the context from the environment, history, and, as a last resort, interaction.

Minimizing Interaction

The thrust of the argument is around reducing interaction and making software smarter, i.e., more context aware and, eventually, able to learn through abstractions over learning algorithms.  I think we can all agree with this emphasis, but I do think he unnecessarily latches onto the term "interaction" as a bad thing, or rather, I think he presents "interaction design" in an overly negative light. 

True, the smarter we can make computers (and consequently the less interaction we require from users) the better, but that doesn't negate the usefulness of interaction design, human factors, information architecture, and usability.  There are many valuable things to be learned and used in all of these interaction-oriented fields, and we shouldn't deride or dismiss them because they focus on interaction.  I felt that Victor's negative emphasis on this, and his speculation about why software sucks in relation to it, took away from the value of his overall message.

The Problem of Privacy

There is one problem that I don't think he addressed in terms of increasing environmental context awareness, and that is security, specifically, privacy.  It is tempting to think about how wonderful it would be for a computer to know more about our environment than us and thus be able to anticipate our needs and desires, but in order to do this, we, as humans, will have to sacrifice some level of privacy.  Do we really want a totally connected computer to know precisely where we are all the time?  Do we really want it to be "reporting" this all the time by querying location aware services?  Do we really want a computer to remember everything that we've done--where we've been, who we've interacted with, when we did things?

I think the trickier issues with context awareness have to do with questions like these.  How do we enable applications to interact with each other on our behalf, requiring minimal interaction from us, while maintaining our privacy?  How does an application know when it is okay to share X data about us with another application?  Do we risk actually increasing the level of interaction (or at least just changing what we're interacting about) in order to enable this context sensitivity? 

If we're not careful, we could end up with a Minority Report world.  People complain about cookies and wiretaps now; the world of computer context-sensitivity will increase privacy concerns by orders of magnitude.  This is not to negate the importance of striving towards greater context sensitivity.  It is a good goal; we just need to be careful how we get there.

Towards Graphic Design

One of the most effective points he made was in illustrating the difference between search results as an index and search results as a tool for evaluation in themselves, i.e., thinking about lists in terms of providing sufficient information for a comparative level of decision making.  It is a shift in how developers can (and should) think about search results (and lists in general).

Similarly, his example of the subway schedule and comparing it to other scheduling applications is a critical point.  It illustrates the value of thinking in terms of what the user wants and needs instead of in terms of what the application needs, and it ties in the value of creating contextually meaningful visualizations.  He references and recommends Edward Tufte, and you can see a lot of Tufte in his message (both in the importance of good visualizations and the bemoaning of the current state of software).  I agree that too often we developers are so focused on "reuse" that we fail miserably in truly understanding the problems we are trying to solve, particularly in the UI.

That's one interesting observation I've had the chance to make in working a lot with graphic/visual designers.  They want to design each screen in an application as if it were a static canvas so that they can make everything look and feel just right.  It makes sense from a design and visual perspective, but developers are basically the opposite--they want to find the one solution that fits all of their UI problems.  If you give a developer a nicely styled screen, he'll reuse that same style in the entire application.  In doing so, developers accidentally stumble on an important design and usability concept (that of consistency), but developers do it because they are reusing the design for maximum efficiency, not because they're consciously concerned about UI consistency!  It is a kind of impedance mismatch between the way a designer views an application UI and the way a developer does.

The Timeless Way

I'm currently reading Christopher Alexander's The Timeless Way of Building, which I hope to comment on in more depth when done.  But this discussion brings me back to it.  In fact, it brings me back to Notes on the Synthesis of Form as well, which is an earlier work by him.  One of the underlying currents in both is designing a form (solution, if you will) that best fits the problem and environment (context).  The timeless way (and patterns and pattern language, especially) is all about building things that are alive, that flow and thrive and fit their context, and the way you do that is not by slapping together one-size-fits-all solutions (i.e., reusing implementations) but by discovering the patterns in the problem space and applying patterns from the solution space that fit the problem space just so.  The reuse is in the patterns, at the conceptual level, but the implementation of the pattern must always be customized to fit the problem snugly. 

This applies in the UI as well as other areas of design, and that's the underlying current behind both Tufte's and Victor's arguments for the intelligent use of graphic design and visualization to convey information.  You must start by considering each problem in its context, learn as much as you can about the problem and context, then find patterns that fit and implement them for the problem in the way that makes the most sense for the problem.  But more on the timeless way later.

A Good Read

Overall, the paper is a good, thought-provoking read.  I'd recommend it to pretty much any software artisan as a starting point for thinking about these issues.  It's more valuable knowledge that you can put in your hat and use when designing your next software project.

Tuesday, August 14, 2007 10:41:14 AM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, July 30, 2007

Are you passionate about software development?  Do you love to share your knowledge with others?  Do you like working in a vibrant, fun culture working on the latest and greatest technologies with other smart and passionate people?  If so, I think I may have your dream job right here.

We're looking for another guidisan to help craft guidance using best practices for .NET development.  The word guidisan ('gId-&-z&n) comes from a blending of "guidance" and "artisan," which really speaks to the heart of the matter.  We're looking for software artisans who have the experience, know-how, and gumption to explore strange new technologies, to seek out new applications and new user scenarios, to boldly go where other developers only dream of going in order to provide deep, technical guidance for their colleagues and peers.

What do guidisans do? 

  • Help gather, specify, and document application vision, scope, and requirements.
  • Take application requirements and create an application design that meets the requirements and follows best known practices for both Microsoft .NET and Infragistics products.
  • Implement applications following requirements, best practices, and design specifications.
  • Create supplemental content such as articles, white papers, screencasts, podcasts, etc. that help elucidate example code and applications.
  • Research emerging technologies and create prototypes based on emerging technologies.
  • Contribute to joint design sessions as well as coding and design discussions.

What do I need to qualify?

  • Bachelor’s Degree.
  • 4+ years of full-time, professional experience designing and developing business applications.
  • 2+ years designing and developing .NET applications (UI development in particular).
  • Be able to create vision, scope, and requirements documents based on usage scenarios.
  • Demonstrated experience with object-oriented design; familiarity with behavior-driven design, domain-driven design, and test-driven development a plus.
  • Demonstrated knowledge of best practices for .NET application development.
  • Accept and provide constructive criticism in group situations.
  • Follow design and coding guidelines.
  • Clearly communicate technical concepts in writing and speaking.

If you think this is your dream job, contact me.  Tell me why it's your dream job and why you think you'd be the next great guidisan.

Monday, July 30, 2007 3:01:27 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [1]  | 
# Wednesday, April 11, 2007

Christopher Alexander is a noted traditional (i.e., not software) architect who's been writing about design since well before I was born.  Some of his books, most notably A Pattern Language, are the basis of the patterns movement (for lack of a better word) in the software industry.  Anyone who writes on software patterns includes his works in their bibliographies, so I figured there must be something to it.

Not being one to trust others' reductions and paraphrasing any more than I have to, I've been wanting to dig into his work myself for some time.  I finally got around to it in early March.  I've started with Notes on the Synthesis of Form, which seems to be the first book in the series on patterns.

Apart from loving the plain black cover and white block lettering and of course the obscure sounding title, I also enjoyed the innards.  It really is interesting how similar the problems and processes of three-dimensional design and architecture are with those of software design and architecture.

I dare not reduce this work or ask you to depend upon my fuzzy recollections for a precise summary, but what follows is what I recall of the book, those things which made enough of an impression to stick with me at least these few weeks since my reading.

First, I recall the observation that we often only really know the proper form (solution) by recognizing things that are out of place (misfits).  What's interesting about this is how utterly incompatible this is with the idea of waterfall design, i.e., trying to imagine and gather all the requirements of a solution up front.  We simply lack the imagination to create solutions that fit perfectly using the waterfall approach, and the more complex the problem, the more likely this approach is to fail.

This is in part why agile, iterative development and prototyping works better.  It enables us to create a form (a solution) and see how well it fits against the actual problem.  We can easily recognize the misfits then by comparing the prototype or iteration to the problem and make small adjustments to eliminate the misfits, ultimately synthesizing a much better-fitting form than we could ever imagine up front.

Second, I found insightful the approach of composing the individual problems into the most autonomous groups (problem sets) possible.  But the key observation here is that this composition should be based in the realities of the problems, not in the preconceived groupings that our profession has set out for us. 

For instance, rather than starting with the buckets of security, logging, exception handling, etc., you identify the actual individual problems that are in the problem domain, group them by their relative interconnectedness, and then attempt to design solutions for those groupings.  The value in this observation lies in keeping us focused on the specifics of the problem at hand rather than attempting to use a sort of one-size-fits-all approach to solving design problems. 

Further, if we take this approach, we will have more success in creating a form that fits because the groupings are along natural boundaries (i.e., areas of minimal connectedness) in the problem domain.  Thus when we create a solution for a set of problems, the chance that the solution will cause misfits in other sets is diminished.
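
To make the grouping idea concrete, here is a minimal sketch in C# that treats design problems as nodes in a graph and their interdependencies as edges, letting the autonomous sets fall out as connected components.  The ProblemGraph type and the problem names are my own invention for illustration; a real decomposition would also weigh how strongly problems are connected, not merely whether they are.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical model: each design problem is a node; an edge means two
// problems constrain one another.  Maximal groups of interconnected
// problems approximate Alexander's autonomous problem sets.
public class ProblemGraph
{
    private readonly Dictionary<string, HashSet<string>> _links =
        new Dictionary<string, HashSet<string>>();

    public void Link(string a, string b)
    {
        Node(a).Add(b);
        Node(b).Add(a);
    }

    private HashSet<string> Node(string name)
    {
        HashSet<string> neighbors;
        if (!_links.TryGetValue(name, out neighbors))
            _links[name] = neighbors = new HashSet<string>();
        return neighbors;
    }

    // Depth-first walk yielding each connected component of the graph.
    public IEnumerable<List<string>> AutonomousSets()
    {
        var seen = new HashSet<string>();
        foreach (string start in _links.Keys)
        {
            if (seen.Contains(start)) continue;
            var group = new List<string>();
            var pending = new Stack<string>();
            pending.Push(start);
            while (pending.Count > 0)
            {
                string current = pending.Pop();
                if (!seen.Add(current)) continue;
                group.Add(current);
                foreach (string neighbor in _links[current])
                    pending.Push(neighbor);
            }
            yield return group;
        }
    }
}

class Demo
{
    static void Main()
    {
        var graph = new ProblemGraph();
        graph.Link("session timeout", "re-authentication");   // a "security" problem...
        graph.Link("re-authentication", "unsaved work loss"); // ...bleeding into UX
        graph.Link("slow report", "nightly batch window");    // an unrelated cluster

        foreach (List<string> set in graph.AutonomousSets())
            Console.WriteLine(string.Join(" + ", set));
    }
}
```

Notice that the two resulting groups cut across the usual security/performance buckets precisely because the edges come from the problems themselves.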

Finally, as we identify these natural sets in the problem domains, we see recurring, similar solutions (patterns) emerge that can be generalized into a sort of rough blueprint for solving those sets of problems.  The patterns are not rote algorithms, admitting no variation or creativity, but rather are like an outline from which the author can craft the message using his or her particular genius.

This avoids the pitfall of the one-size-fits-all solution, provides for competition and creativity, and ultimately has the best chance of enabling designers to create a system of forms that integrate harmoniously and address the actual problems at hand.

And the idea is that these sets are also hierarchical in nature, such that one can create sets of sets of problems (and corresponding patterns) to build higher and higher level coherent views of extremely complex problem domains.  This, of course, fits nicely with the way we deal with problems in the software world (or in managing people, for that matter), dealing with problem sets and patterns all the way from enterprise application integration down to patterns governing individual instructions to a CPU (or from the C-level management team down to the team supervisors).  What can we say?  Hierarchies are a convenient way for us to handle complex problems coherently.

So what does it all mean?  Well, I think it in large part validates recent developments in the industry, from agile development (including test-driven design) to domain-driven design to, of course, the patterns movement itself.  We're seeing the gradual popular realization of the principles discussed in this book.

It means that if we continue to explore other, more mature professions, we might just save ourselves a lot of trouble and money by learning from their mistakes and their contributions to human knowledge.  It's like avoiding a higher-level Not Invented Here Syndrome, which has long plagued our industry.  We're a bunch of smart people, but that doesn't mean we have to solve every problem again!  Why not focus on the problems that have yet to be solved?  It makes no more sense for a developer to create his own custom grid control than it does for our industry to try to rediscover the nature of designing solutions to complex problems.

It also means that we have a lot of work to do yet in terms of discovering, cataloguing, and actually using patterns at all levels of software design--not for the sake of using patterns but, again, for the sake of focusing on problems that have yet to be solved.  I look forward to reading The Timeless Way of Building and to the continued improvement of our profession.

Wednesday, April 11, 2007 11:27:33 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, November 14, 2006

I finally got a chance to start looking at the blog posts that have been piling up in my Newsgator for a few months now, and I was pleasantly surprised by probably the best thoughts on SOA I've seen in a long time.  I'm really glad that Rocky's fighting the good fight on this one, and he's been consistent, too.  Another, more thorough commentary on the subject is provided by a good friend of mine, Tom Fuller, in his article last year on The Good, the Bad, and the Ugly of Service-Oriented Architecture.

The bottom line, IMO, is that working towards SOA is a good thing, but we have to be very cautious and extremely deliberate in how we get there.  I think most good architects know this, but we have to get the message out there and overcome the hype to minimize the trough of despair.

Tuesday, November 14, 2006 6:47:42 PM (Eastern Standard Time, UTC-05:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, August 02, 2006

I just ran across Ryan Plant's post about the "Web Architect."  He seems affronted by the idea, but this was one of my titles in a former life, so it's not so surprising to me.  There are indeed a lot of considerations (some of which Ryan talks about) that you have to think about when designing web applications that you don't have to think about in other kinds of applications.  It really does take a specialized set of skills.

That said, I don't think that having those specialized skills would necessarily qualify one as a web architect.  Given my previous thoughts on the subject (illuminated here and elsewhere), many of which seem to echo or be echoed in other publications on the question of what IT architecture is, I tend to think that the web architect role would be valid if it were thought of as the individual responsible for a company's web presence.  There are, I think, distinct questions that have to be thought about in terms of the business and how it is represented on the web (at least on the properties controlled by the business).

Depending on the company, there may be warrant for an individual in a web architect role, which would of course assume knowledge of the specific skills Ryan speaks to; but, more importantly, this role would be responsible for considering how to strategically take advantage of the web to address business needs.  In some companies, such a role may be subsumed into the greater enterprise architect or solutions architect roles.  In others, I could see it being a peer or possibly a report to the enterprise architect and a peer with other IT architects, working with him to coordinate technology application for the business specifically on the web.  This assumes that there is sufficient business need for such a distinguished role, not just a need for the web skill set.

Wednesday, August 02, 2006 2:13:41 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, July 19, 2006

While reading over the latest issue of Perspectives from the IASA, it struck me that my current thinking about philosophy rings true for how I'm thinking about architecture, at least in one rather important aspect.  You see, in considering all of the various philosophical systems developed over human history, it strikes me that there is no one philosophy that suits all people, at least not realistically speaking.

Sure, as a devout Roman Catholic and amateur philosopher myself, I do think, ideally speaking, that Catholicism is the best philosophy for all human beings.  The problem is that, first, not all humans are philosophers.  Second, and vastly more importantly, all philosophers and non-philosophers alike are humans. 

As humans, we're made up of more than just plain ol' objective reasoning.  Indeed, I rather think that we are first and foremost a bundle of nerves and emotions, and only a few among us even try to tame that bundle into something resembling objective and rational thought.  Even those few are still subject to the non-rational whims of humanity, including prejudices, presuppositions, and all that other non-rational goo that makes us who we are.

This is why I say that, realistically speaking, there is and can be no unifying philosophy that all humans can follow, as much as I might like for it to be otherwise.  I think this much has proven true in that neither by force nor by argument has any one philosophy been able to subdue humanity in all our history, despite attempts by the very strong, the very intelligent, and the very persuasive among us.

If this is true, what is then the best thing that we can do?  Right now, it seems to me that perhaps the best thing that philosophers can do is to try to discover philosophies that are the best for persons with a given background, a given culture, and at a given time.  I don't think this is the same thing as relativism because, first, we can still talk about the best objective philosophy for all humans (even if all humans will never follow it), and second, we can talk about an objectively best philosophy for persons of similar backgrounds, cultures, and times.  We can still say that our philosophy is the best for humanity while realizing that perhaps the best for this person over here is another, given all the factors that have shaped him or her.

About now, my technical readers will be wondering when I'll get back to talking about architecture and how it relates to these ramblings, and, happily for them, here we are.  The most recent issue from the IASA has several articles on what it means to be an architect, how to become an architect, and how best to educate for architecture, among other things.  In reading these, I was struck (I should say again) that there doesn't seem to be one unifying idea of what it means to be an IT architect or how to become one.

Certainly, there are commonalities and core competencies, but I think that ultimately, the question of whether or not one can know if he is an IT architect (shall we say, the epistemology of IT architecture) and consequently whether or not you can tell someone else you are one, depends largely on the context of the question.  Just as there are many different industries, company sizes, and corporate cultures, so it seems there should be many different categories of architects to match. 

In an earlier blog post and article this year, I tried to throw out some ideas about what software architecture is and how we should be thinking about it.  I still think that the distinctions I was drawing are valid as are the key differentiators between software architects and developers, and incidentally, I'd suggest that the distinctions are also valid for the infrastructure side of IT.  It seems to me that the key defining aspect of an architect is the ability to tangle with both the business and the technology problems and effectively cut through that Gordian Knot, arriving at the best solution.

If so, then what makes a person an IT architect depends on the business at hand and the technology at hand, not on some presupposed host of experience with different businesses and architectures.  The issue I take with Mr. Hubert in his “Becoming an IT Architect” (IASA Perspectives, Issue 4) is that it sounds as if one must have visited all his “stations” in order to know one is an architect.  While he starts out the article saying he is just recounting his particular journey, most of the article smacks of an attempt to generalize his individual experience into objective truth, in much the same way that some philosophers have tried to draw out the best objective philosophy based on their own experiences and cultures.  In the end, such attempts invariably fall flat.

Without digging into the specifics of the "stations" that I don't think are core to becoming an IT architect, let's stick to the central proposition at hand (which makes such a specific deconstruction unnecessary): namely, that IT architecture at its essence is the previously described weaving of business and technology skill, with an admittedly stronger technical than business bent.  If that is the case, there is no one definition of what it means to be an IT architect, nor, consequently, any one path to becoming one.  With that in mind, reading Mr. Hubert's story is valuable inasmuch as one wants to know how to become a software architect at the kinds of companies, projects, and technologies that Mr. Hubert works with today, but it is only one story among many in the broader realm of IT architecture.

Rather than trying to establish some single architect certification that costs thousands of dollars and requires specific kinds of experience to achieve, we should think in terms of what it means to be an architect for a company of this size, with this (or these) primary technologies, this culture, and at this time in the company's life.  Only within that spectrum can we realistically determine the best definition of an IT architect, much like there may be a best philosophy for individuals within the spectrum of particular backgrounds, cultures, and times.

Does this mean we can't talk about skills (truths) that apply to all architects?  I don't think so.  The chief skill is what I've already mentioned (solving business problems with technology), but perhaps we could say that all architects need deep experience and/or training in a technology (or technologies).  Similarly, we could say that architects need training or experience in business in general (those concepts and skills that span different industries).  We might also say that they need training or experience in particular industries, at least one.  These individual truths combine to form something of an objectively best architect, but the specific best architect definition will vary depending on the context.

This kind of talk provides a broad framework for speaking about IT architecture as a profession while leaving room for the specific categories that could be specified to enable better classification of individuals to aid in both education and recruiting.  We already have some of these definitions loosely being developed with such terms as "solutions architect," "enterprise architect," and "infrastructure architect."  However, I feel that these may still be too broad to be able to sufficiently achieve an epistemology of IT architecture.  Maybe "enterprise" is the best one among them in that it historically does imply a large part of the context needed to have a meaningful category within IT architecture, but I tend to think that "solutions" and "infrastructure" are still too vague and lacking context. 

I don't propose to have the solution all worked out, but I do think that the key thing, both in philosophy and software architecture, is to provide the contextual trappings needed to determine the most meaningful solution to the problem at hand.  If that means speaking of a software architect for a local, small, family-owned brewery on the one hand, and an infrastructure architect for a multinational, Fortune 500 telecom company on the other, so be it.  But if we can generalize these sorts of highly contextual categorizations into something more usable for education and certification, all the better.  Granted, we won't have categories that sufficiently address every meaningful variation (as is the case with all taxonomies), but as long as we're working forward with the necessary framework of context, I think we'll get a lot closer than many of the current attempts that result in overgeneralization (and thus lose meaning as categories per se).

In the meantime, I'd suggest that my assertion that the key distinction is in one's purpose (see the aforementioned article) is the best way to establish a basic epistemology of IT architecture.  I think it is certainly sufficient for individual knowledge and broad group identification, though clearly more needs to be worked out to assist in the development of training, education, and certification that will feed into trustworthy standards in the various categories of IT architecture.

Wednesday, July 19, 2006 10:30:40 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [2]  | 
# Monday, June 05, 2006

As most of you who follow my blog know, I recently joined Infragistics.  Well, I finally got around to getting my company blog set up, so if you're curious or interested, feel free to check it out and maybe subscribe.  While you're there, if you are a customer or potential customer, you might want to look around at the other blogs and maybe subscribe to some of them to stay on top of Infragistics stuff.

Monday, June 05, 2006 11:16:05 AM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Saturday, April 29, 2006

I just updated this site to the latest version of dasBlog.  Many, many thanks to Scott for helping me get it (given that I am a total noob to CVS and, apparently, picked a bad time to start since SF was having issues).  Most notably (that I know of), this version incorporates FeedBurner support, which I guess is the latest and greatest for distributing your feed and lowering bandwidth usage, though I'm sure there are some other goodies in there.

Anyhoo, let me know if you suddenly start running into any problems with my blog.  Have a good un!

Saturday, April 29, 2006 2:19:18 PM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, April 24, 2006

Not long ago, I polled subscribers as to what they're interested in.  There seemed to be a fairly even divide between what I'll roughly call Technical posts and Non-Technical posts.  In fact, my goal with this blog is to be a blend of those two general categories.  At the same time, as much as it hurts to admit it, I know that some folks really don't care about my opinions on non-technical matters.  So it struck me (some time ago, actually; I've just been lazy) to create two general categories using the creative taxonomy of Technical and Non-Technical. 

Why?  Because dasBlog (like most other blog systems, I imagine) allows you to subscribe to category-based RSS feeds as well as view posts by category.  So from this day forward, in addition to the more specific categories, I'll be marking all posts as either Technical or Non-Technical.  If all you care about is one or the other, you can subscribe to just that feed and never be bothered with the stuff you don't care about.

You can view/subscribe to the feeds using the feed icon next to each category in the list (of categories).  Here are direct links as well:

Technical

Non-Technical

I hope this helps!

Monday, April 24, 2006 10:28:33 AM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [0]  | 

In a recent post, I mentioned the acronym OPC, meaning "Other People's Code."  Somehow I doubt I'm the first person to use the acronym, so I don't intend to claim it as my own.  I can't say I've seen it before, but it seems so obvious that it ought to exist: OPC is so prevalent that we should all be talking about it much more than we do.  In an industry that arguably values reuse over every other virtue, you'd think that OPC would have long been canonized.

Yet it seems to me that when most people speak of reuse, they mean Their Own Code (TOC, I love overloading acronyms!) or, from their perspective, My Own Code (MOC).  In essence, they want other people to reuse their code, but there ain't no chance in heck that they're going to use OPC as a means to achieve the ultimate goal.  I want MOC to be reusable.  How can I impress my friends by writing code that can be reused by as many other people as possible?  This is something most of us who strive to be great at software think at one point or another--perhaps not in so many words, but ultimately there is a sense of great pride when you polish off that last speck on your chrome-plated masterpiece, showing it to your buddies or the world in general and saying "that's MOC."

The funny thing is that more often than not, the really ardent software folks among us, and even the less ardent, have a predilection for the NIH syndrome.  It's because we're all so damned smart, right?  Surely, those other folks at NIH Co. couldn't possibly have done it as well as I could have!?  Of course, we've got all the rationalizations lined up for when the business folks ask:

1) "I can't support that because I don't know it--I need the source code."
2) "You know, it won't meet our needs just right, not like I could do it for you custom."
3) "How much?  Geez.  I could write that in a day!"
4) "It's not using X, which you know is our preferred technology now."
5) "Did they promise to come work here if they dissolve the company?  I mean, you're just gambling on them."

And the list goes on.  We've probably all done it; I know I have.  Why?  Because, as one developer friend once put it (paraphrased): "I love to invent things.  Software is an industry where you get to invent stuff all the time."  In other words, we're creative, smart people who don't feel that we're adequately getting to express our own unique intelligence unless we write the code ourselves.

And now we finally come to what prompted this post.  I recently looked over an article by Joshua Greenberg, Ph.D. on MSDN called "Building a Rule Engine with SQL Server."  I'm not going to comment on the quality of the solution offered because I hardly think I am qualified to do so.  What completely flabbergasted me is the total omission of the rules engine built into Windows Workflow Foundation.  Surely someone who has put that much thought into the theory behind rules engines, which, as he mentions in his conclusion, are probably best known in workflow systems, would be aware of WF's own?  Surely one of the editors at MSDN Mag, which has done numerous articles on WF, including one on the rules engine itself published in the same month, would think it worth noting and perhaps comparing and contrasting the approaches?
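
For the curious, the WF rules engine lives in the System.Workflow.Activities.Rules namespace and evaluates CodeDom-based conditions and actions against an arbitrary object.  What follows is a minimal sketch of the API as I recall it--the Order type, the rule name, and the threshold values are invented for illustration, so treat it as a starting point rather than a definitive sample.

```csharp
using System;
using System.CodeDom;
using System.Workflow.Activities.Rules; // System.Workflow.Activities.dll (.NET 3.0)

public class Order
{
    public decimal Total;
    public decimal Discount;
}

class Demo
{
    static void Main()
    {
        // Condition: this.Total > 1000
        var condition = new RuleExpressionCondition(
            new CodeBinaryOperatorExpression(
                new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), "Total"),
                CodeBinaryOperatorType.GreaterThan,
                new CodePrimitiveExpression(1000m)));

        // Then action: this.Discount = 0.1
        var rule = new Rule("BigOrderDiscount") { Condition = condition };
        rule.ThenActions.Add(new RuleStatementAction(
            new CodeAssignStatement(
                new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), "Discount"),
                new CodePrimitiveExpression(0.1m))));

        var ruleSet = new RuleSet("OrderRules");
        ruleSet.Rules.Add(rule);

        // Validate the rule set against the target type, then execute it.
        var order = new Order { Total = 1500m };
        var validation = new RuleValidation(typeof(Order), null);
        ruleSet.Execute(new RuleExecution(validation, order));

        Console.WriteLine(order.Discount); // 0.1
    }
}
```

If memory serves, the same RuleSet can also be serialized and edited in WF's rule editor dialog--exactly the kind of OPC payoff that a hand-rolled engine forgoes.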

Now, I don't want to draw too much negative attention to the article or Mr. Greenberg.  He and the editors are no more guilty of ignoring OPC than most of us are.  It is just a prime example of what we see over and over again in our industry.  On the one hand, we glorify reuse as the Supreme Good, but then we turn around and when reusable code (a WinFX library, no less!) is staring us in the face, an SEP field envelops reuse, enabling us to conveniently ignore OPC and start down the joyous adventure of reinventing the wheel.

This has got to stop, folks.  I'm not saying that this ignorance of OPC is the primary cause of the problems in our industry (I happen to think it is only part of the greater problem of techies not getting the needs of business and being smart enough to hide it).  But it is certainly one that rears its ugly head on a regular basis, as we guiltily slap each other's backs in our NIHA (NIH Anonymous) groups.  We have a responsibility to those who are paying us and a greater responsibility to the advancement of our industry (and ultimately the human race) to stop reinventing the wheel and start actually reusing OPC.  I'm not saying there is never a justification for custom code (God forbid!), but that custom code needs to be code that addresses something that truly cannot be adequately addressed by OPC.

There will always be plenty of interesting problems to solve, which give way to creative and interesting solutions.  Just imagine if all this brainpower that goes into re-solving the same problems over and over again were to go into solving new problems.  Where would we be now?  Now that's an interesting possibility to ponder.

Monday, April 24, 2006 10:21:02 AM (Eastern Daylight Time, UTC-04:00)  #    Disclaimer  |  Comments [2]  | 
# Thursday, February 23, 2006

I just read an article in The Architecture Journal about this thing called BSAL (Behavioral Software Architecture Language).  The author bores us with the same old history of problems with docs becoming out of date, claiming that BSAL is the answer to all that, saying "the important addition that BSAL brings into play is the common, standard usage of a few building blocks in software definition," which are loosely "system, subsystem, state, behavior, and message objects." 

I'm having a real hard time getting excited about this.  In my (and many, many others') opinion, the real problem in business software engineering is not so much that documentation gets out of date (though that is definitely an issue).  The problem is that even our documentation--much less the implementation!--does not accurately describe or meet the actual business needs.  Further, because business needs constantly change, having a better way to do upfront design doesn't (and has proven not to) really solve the core problems; you need a way for your implementation to move with the business changes and, ideally, have that reflected in your design documents.

While I can see the value in having a more accurate overview of the system (and I'm really keen on behavior/process-based design), I think that BSAL is attempting to solve the problem from the solution domain perspective and not the problem domain perspective.  The behaviors that architects of business systems need to be concerned with, in my estimation, are not so much the behaviors of the system (in themselves) but the behaviors of the classes of business objects and, in particular, how those specific behaviors support a greater business process.  Any software architecture language that wants to solve business problems should speak in terms of the business so that the formalization of business processes and behaviors become the actual behaviors of the system.

And that's precisely why I see so much promise in the broader application of domain-specific languages.  You can define DSLs that let you speak to specific problem domains.  Not only that, these languages can generate not just high-level stubs but, potentially, real, executable code.  They do so in ways that are far more pertinent to the problem at hand than any high-level, universal SAL like UML.  (And, by the way, what does BSAL offer that UML doesn't already provide?)
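
To illustrate the flavor, here is a hedged little sketch of an internal DSL in C#: a fluent rule language for a hypothetical loan-approval process.  The BusinessProcess type, its method names, and the loan domain are all invented for this example; a real DSL (internal or, better, external with code generation) would do far more than return a string.

```csharp
using System;
using System.Collections.Generic;

public class LoanApplication
{
    public decimal Amount;
    public int CreditScore;
}

// A tiny internal DSL: each rule pairs a predicate with an outcome.
// Rules are evaluated in the order declared; the first match wins.
public class BusinessProcess
{
    private class Step
    {
        public Func<LoanApplication, bool> Test;
        public string Outcome;
    }

    private readonly string _name;
    private readonly List<Step> _steps = new List<Step>();

    private BusinessProcess(string name) { _name = name; }

    public static BusinessProcess Define(string name)
    {
        return new BusinessProcess(name);
    }

    public BusinessProcess When(Func<LoanApplication, bool> test, string outcome)
    {
        _steps.Add(new Step { Test = test, Outcome = outcome });
        return this; // returning 'this' is what makes the fluent chaining work
    }

    public BusinessProcess Otherwise(string outcome)
    {
        return When(a => true, outcome);
    }

    public string Evaluate(LoanApplication app)
    {
        foreach (Step step in _steps)
            if (step.Test(app)) return step.Outcome;
        throw new InvalidOperationException("No rule matched in " + _name + ".");
    }
}

class Demo
{
    static void Main()
    {
        // The "language": a loan-approval policy stated in domain terms.
        BusinessProcess process = BusinessProcess.Define("LoanApproval")
            .When(a => a.CreditScore < 600, "Reject: insufficient credit")
            .When(a => a.Amount > 50000m, "Route: senior underwriter review")
            .Otherwise("Approve");

        Console.WriteLine(process.Evaluate(
            new LoanApplication { Amount = 75000m, CreditScore = 710 }));
        // Prints: Route: senior underwriter review
    }
}
```

The plumbing is ordinary C#, but the Define/When/Otherwise lines read in the vocabulary of the problem domain--which is precisely the property that generic system/subsystem/state building blocks can't give you.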

It is entirely possible that I'm not getting the real benefits behind BSAL, but as it was explained in that article, I just don't see it.  I think the software factories initiative has far more promise to make our industry better than yet another UML.

Thursday, February 23, 2006 2:56:55 PM (Eastern Standard Time, UTC-05:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, February 17, 2006

Have you heard of the International Association of Software Architects?  If you’re a software architect, or even an aspiring one, you need to be aware of this inspiring organization.  While some architecture organizations focus on a top-down, global, or vendor-based approach, the IASA focuses on all IT architects at every level, regardless of their vendor affiliations, starting at the local level.  To better serve this end in the central Florida area, Tom Fuller and I have started up the Tampa Bay chapter.

The IASA’s aim is for architects to provide value to one another and to make IT/software architecture a full-fledged profession with a significant knowledge base and quality controls.  Key goals of the IASA are:
   • To provide the latest news and articles in the architecture discipline.
   • To support the establishment of strong relationships among architects both as peers and as mentors.
   • To support and fulfill the needs for working groups as challenges in our industry call for them.
   • To provide both local and global forums for debate of issues pertinent to the profession.
   • To enable each and every architect to grow in the profession and to impact the software industry in positive ways.

If you want to know more about this organization, please visit the IASA Web site or contact Tom or me directly (or even comment on this blog).  To get involved and stay aware of important announcements (such as meeting times) in the Tampa Bay and central Florida area, register on the site for the Tampa Bay chapter.  We look forward to seeing you there!

Friday, February 17, 2006 4:00:07 PM (Eastern Standard Time, UTC-05:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, February 15, 2006

This is a continuation of a discussion I began with a short article on ASPAlliance.  Tom Fuller had some good comments, and I've since been discussing the subject with some other experienced developer-architects.  Both Tom and they have very valid and interesting concerns about what I'm proposing.  The biggest pushback has been on my definition of the architect role as being 1) not an überdeveloper and 2) not necessarily more valued or well-paid than developers.  Of course, there is also some vagueness around just what an architect would do, and that much, at least, was intentional.

First, let me say these propositions are not thoroughly baked in my mind, and rather than trying to tell everyone exactly how it should be, I’m just tossing out some ideas that may or may not pan out in the end.  This is the nature of thoughtful dialogue, and I appreciate anyone who has constructively critical feedback.  As such, I’m not promising any sort of unbreakable consistency on my part as the ultimate solution is the goal and not the presupposition of this dialogue.

Now, the core issues with both of the objections mentioned above are, I think, inherent in the way the architect role is perceived today.  Because architects are expected to design systems down to even the class/interface level, they are expected to be developers, and because they’re expected to know everything developers know and more, they should naturally be more valued and paid better by the business.  It may be true that this is the way things should be and continue to be, but what I’m suggesting is that this view of the architect role is in fact not optimal to achieve successful projects or even enterprises.

I think that if we did have a complete career track (such that architecture is in fact a distinct but related discipline), an architect's duties would be distinct enough from a developer's to not require “real-life” dev experience in order to be good.  This would be achieved not just through training or book knowledge but also from experience as an architect, starting as a junior and being mentored by senior architects.  This way you are still learning from experience but instead of learning developer lessons, you're learning architect lessons.

Similarly, because architects are responsible for a different set of problems, they don’t need to be an expert developer in the first place.  While their role may ultimately be perceived as more valuable to a business, it would not be due to simply being a developer++ but rather due to the distinct value they bring to the business.  And even within the architecture discipline, I imagine there’d be different levels of experience as well as different kinds of architecture responsibilities (solution architect, enterprise architect, and infrastructure architect, for example).  You’d have some junior architects who are less valued (paid less) than some mid-level or senior developers.

This does not preclude some cross-over between the disciplines—a senior developer would likely have a faster track to a senior architect role than a junior architect right out of college because there is most definitely a shared body of knowledge between the two. It may be that said developer simply has not had the opportunity to pursue an architecture path, or it may be that due to fiscal constraints, he’s had to play both roles and hasn’t been able to fully pursue architectural interests, which is often the case for smaller shops and consulting firms.

Of course, all of this implies a redefinition to some extent of the role of an architect--he'd take on some of what some people think of as business analyst functions, some of what might today be seen as project manager functions, some of what may be seen as QA functions, and even perhaps some of what might be seen as developer functions. 

The architect would take on the responsibility of understanding and assisting BAs in the analysis of the business in terms of how and where technology would best apply to solve business problems.  She’d be responsible not for scheduling and keeping the project on track but for ensuring that the business needs are sufficiently addressed by what the developers are building.  In ensuring that the output of the project aligns with standards and business needs, she serves in a kind of QA role.  And she might do some development of cross-functional solutions like authentication, authorization, and auditing; at the least, she’d take on the specification of how these things should work to ensure consistency across such cross-functional areas and within the greater enterprise architecture.

I don’t think it is fruitful to further delineate responsibilities at this level because the specific responsibilities will vary based on the size of the development team(s) and, more importantly, the methodology being used.  The key realization is that the architect is the hub between these various disciplines, not some other person (such as the BA or PM).  The reason I think the architect should be this person is that the product being built is software, and you need an individual who is very well-versed in software design, the trade-offs, the technologies, and the patterns, but who can also engage with the other disciplines to ensure that everything works together like a well-oiled machine.  It is a very strongly technical position, but it is also a very strongly people- and business-oriented position.

To facilitate this change or (I’d suggest) better definition of an architect’s role, he’d divest himself of what some people see as the “ivory tower” architecture role--the idea of specifying the system down to its lowest level of abstraction and handing that specification down from on high to developers.  This is key to getting developer buy-in to any kind of architect role, and it is key to the architect being able to take on a more holistic role in the project.

Most devs want to do some design work—within their technologies and within specified business requirements, and I think in this world I'm proposing, they would.  The role of the architect in terms of system design would definitely be the big picture concerns--cross-functional and non-functional, cross-application, technical requirements, etc.--and the architects and devs would have to work together.  Design could not be done in a vacuum in either role.

At this point, the term "architect" itself comes into question, as the metaphor's value diminishes.  Indeed, I think architect is probably not the best metaphor because we're dealing with designing computer systems that make business processes more efficient, not with designing a free-standing physical building.  So perhaps we've approached the problem wrong in the first place--the laws of physics don't change every week, but business processes can and do.

It may be that the role I’m envisioning is not in fact a refining of the architect role but rather the specification of a new distinct role.  But if that is the case, I question the value of thinking of architects and developers as distinct roles, which speaks volumes to the current confusion around the roles today.  Most developers do design work, and if the only distinction between the roles is whether or not design work is done, where’s the value in the distinction?  Why not just call all developers architects or all architects developers?

Truly, though, I think there is a distinction—the one I am trying to draw out via further refinement of what we might think of as the architecture discipline.  It’s partial refinement, partial redefinition, but I tend to think that this refinement and redefinition is necessary to not only enable the discipline to grow but also to be able to communicate a distinct value that a person in the “architect” role brings to the table.  He’s not just a developer (though he certainly can have development skills), he’s the key role—the hub between the disciplines we’ve come to realize should be involved in software creation—that ultimately makes or breaks a project.

That’s not to imply that the other disciplines do not add value or that their failure will not break a project as well.  But the state of things today seems to be that these disciplines have a hard time coordinating and actually coming up with a coherent solution that truly meets the business needs.  Up to this point, we’ve been refining those disciplines in themselves and trying to define processes, methodologies, and documentation to solve the problem of failing projects or projects that just aren’t solving business issues.  And last I read, as an industry, we’re still doing poorly when it comes to project success from a business perspective.

So I think rather than solving the issue by further refinement of the currently-known disciplines, processes, and methodologies, we need to pull architecture out of the development discipline, pull out of development what needs to be pulled out with it (which does not mean all design), and give architecture new responsibilities (or at least better defined ones) that essentially relate to being the technical hub of the project or enterprise.

Whether we still call it the architect role at the end of the day is irrelevant to me.  I don’t think the term fully speaks to the role I’m imagining, but it does to some extent, and since it is fully entrenched, it may not be worthwhile to completely change it; better, perhaps, to redefine it into something more pertinent to our industry, which deals with business processes, not gravity, mortar, wood, plumbing, and electricity.

On the other hand, software is a new thing in human experience, and I tend to think the repeated attempts to redefine terms from other industries in our own may not be the best approach.  Whether it’s likening it to authoring, architecting, developing, constructing, etc., none of these will fully speak to the roles necessary for successful software.  So we need to keep that in mind as we think about how we further refine our industry and be willing to coin new roles as they are necessary.  But that, I suppose, could be a discussion all its own.

Wednesday, February 15, 2006 5:13:31 PM (Eastern Standard Time, UTC-05:00)  #    Disclaimer  |  Comments [1]  | 
# Tuesday, February 07, 2006

Check out my latest piece on ASPAlliance and let me know what you think!

Tuesday, February 07, 2006 9:00:23 AM (Eastern Standard Time, UTC-05:00)  #    Disclaimer  |  Comments [0]  | 

Disclaimer
The opinions expressed herein are solely my own personal opinions, founded or unfounded, rational or not, and you can quote me on that.

Thanks to the good folks at dasBlog!

Copyright © 2014 J. Ambrose Little