# Peter Seale's weblog

## Opinionated SharePoint | the Joys of PowerShell | Awesomeness

Wednesday, February 27, 2013 7:30:26 PM UTC

#### Summary

Resharper is worth the money, but maybe not for the reasons you've heard.

#### Resharper: worth the money

Most people I've watched do presentations (and this isn't all of you, to be fair, just most) do two things:

1. Say "Resharper is great. I can't live without it."
2. Proceed to take 30 seconds fighting the code templates to create a simple property "string FullName { get; set; }"

Alternately, they say something about how R# does feature X (e.g. navigate to file), something you supposedly can't do with vanilla Visual Studio; unfortunately for them, while they weren't looking, the feature was introduced in Visual Studio some years ago.

Alternately, they say something about how awesome R# is while spending the first 15 minutes of the presentation installing and configuring it, including key bindings, live.

I love you guys etc, but these kinds of demonstrations aren't helping.

So let me explain what it actually does, as someone who knows both Resharper and the Visual Studio equivalents. I love Resharper, but I think it's "important" that we all understand what makes it great. There are just a few things for me that make it worth its price:

• SHIFT+ALT+L - after navigating to a file, sometimes you want to browse files in the same directory. But you have no idea where you are in your thousands-of-files-solution. In vanilla Visual Studio you can do CTRL+ALT+L and it will open the Solution Explorer if hidden and set focus on it. Somewhere. Sometimes it will set the focus on the file you have open, sometimes not. With Resharper and SHIFT+ALT+L, you always navigate to the file you have open—the behavior is consistent and exactly what I want. Incidentally I miss this feature in Sublime Text 2.
• The rest of the navigation features, most notably navigating to implementations of an interface. Vanilla Visual Studio will navigate to a class, but it does not navigate to implementations of an interface.
• Browsing dll source code - R# will offer you the option of downloading Microsoft framework source (if that's what you want to see), decompiling the DLL and showing the decompiled source, or using what I assume is the built-in behavior of finding source from PDB files. There's also a feature to set breakpoints in this "browsed" code, but I never got that working for decompiled code in the R# 7 beta.
• Move entire folder at once - this is beautiful for bulk reorganizing code. Something that would take hours is automated and ~completely safe. The only downside I've noticed is that the bulk move operation does NOT use TFS's "move file" command and this means you will lose file history on anything you move. By the way, this is not just updating namespaces, you can move entire folders from project to project and it will figure out if that operation is even legal, then do it for you.
• The R# test runner is still better than Visual Studio's, even the updated Visual Studio 2012 test runner. The reigning king of test runners, TestDriven.NET, isn't free, so stop pirating it, you pirate scum.
• "Move to matching file" is genius and perfect for those of us too lazy to create new files just to make a class.
• Dead code is highlighted (unlighted?) in grey. If you turn on solution-wide analysis, unused public methods and classes also show up grey.
• Resharper's squiggly warnings are too strict for my tastes (e.g. change every variable declaration to var, including primitive value types), but each squiggly rule can be configured or muted as desired. I've caught a few disgusting bugs with the squigglies, so for fixing those bugs alone I'd have paid the money.

Did you notice what isn't on the list? Code generation snippets have always been more trouble than they're worth for me. Refactorings like rename and extract method are included in Visual Studio, and the more complicated useful refactorings like "move behavior to parent class" I use seldom enough that I can live without them. These are good features that work and are very easy to demo, but they're not worth paying for.

#### The World Of Duh: A blog series

Welcome to the World of Duh, a blog series in which I talk about something new to me in an informal, unresearched, and often factually inaccurate way. My goal with this series of posts is to help those similar to me. Given I didn't do much research on the topic, take it for what it's worth: just some guy's opinion. You're welcome.
Categories: World of Duh
Wednesday, February 27, 2013 6:14:33 PM UTC

#### Summary

Martin Fowler has written an excellent piece on estimation. Go read it. Inspired by his post, I have a tiny extra point to make: we still need to adopt Agile (whatever that means), even in 2013.

#### We're in the post-Agile era

It's 2013 and Agile hasn't solved every team's problems like we were promised. We blame Scrum, we blame ourselves for not having enough faith, and sometimes we blame individual Agile practices. Today the Agile practice under fire is estimation.

The short version of the complaint is that estimation is wasteful and you shouldn't need to spend valuable time doing it. Martin Fowler wrote an excellent piece that doesn't deny the cost of estimation, and gives some good tips for identifying when estimation is valuable. Go read the article.

Incidentally if it sounds like I disagree with his article in any way, just for the record, I'm on team Martin on this one. I don't pretend to have the authority to disagree anyway.

#### The danger of post-agile

The backlash against Agile is useful in that each time an Agile practice is assaulted, it is then defended. In turn the defense (like Martin's article) helps onlookers like me understand why the practice exists. That's a good thing.

The problem with all these assaults against Agile practices is that the old arguments for adoption don't fly as well as they used to. "Because Agile says so" is no longer a good enough reason to adopt it. And while examining each practice for waste is great for already high-performing teams, almost every team I've met or heard about could benefit by adopting ALL of Agile. ALL of it, including the wasteful parts—Agile is a net win for most everyone. Even in 2013. Even Scrum.

#### War stories

• I've met a team that does exclusive checkouts. That was 2012. In case you've never seen this, it means that when one developer checks out a file, no one else can open it for editing; they must wait for him to finish before even beginning their own edits.
• The majority of projects don't use Continuous Integration, even to compile their project.
• I've met a team that did estimation (in hours). When asked what they do when they are over their estimate, they said they close the original task and "borrow" hours from other features assigned to the developer. I hid my horror well when I heard that.
• I have met developers that routinely check in broken code.
• I have met guys a few years from retirement that contributed nothing to a project.
• I saw a job posting from last year that said roughly "We have great developers, but they're not great with SVN. Your job duties include taking zip files via email and checking them in."

The point of these little bullets is to say that there is a lot of dysfunction in the world, and adopting Agile will at the least reveal that dysfunction.

#### Agile: bring the pain

So I entreat you, if you are starting out fresh somewhere and are wondering about this whole Agile stuff, look. You can skip Agile and write your functional-paradigm Lean project, no sprints, no formal retrospective meetings, no stand-ups, doing code reviews instead of pairing, doing BDD with no unit tests at all, all tests written after the production code, and all this done without estimation. Go do that, it's all good, you get it.

But if you're in a situation where you're not sure "what is it, you say, you do here," go ahead and buy the books and adopt strict XP/Scrum-flavored Agile and start fixing problems. And when someone brings up an argument saying "estimation is wasteful", maybe they're right, but it's more likely that they have never seen estimation done properly, and you just need to do inefficient Scrum by rote until you understand why. And if it truly isn't working, don't give it up just yet—try and understand many of the common team anti-patterns now disguised by post-Agile. A real problem I've seen with post-Agile is that people no longer try to make Agile work, and they justify all kinds of poor behavior under post-Agilism.

#### The World Of Duh: A blog series

Welcome to the World of Duh, a blog series in which I talk about something new to me in an informal, unresearched, and often factually inaccurate way. My goal with this series of posts is to help those similar to me. Given I didn't do much research on the topic, take it for what it's worth: just some guy's opinion. You're welcome.
Categories: World of Duh
Friday, February 22, 2013 9:19:55 PM UTC

#### Summary

Download ShiftIt, which helps you move/resize windows by assigning global hotkeys. Make sure you get version 1.6.

#### Introduction

I'm using a MacBook Pro on a day-to-day basis. My first (and lasting) impression is that they removed the 6 most precious keys on the keyboard—Home, End, PgUp, PgDn, Del, Ins, and they did it because they hate me.

But we're not here to talk about how much pain I've endured attempting to mentally map the Mac equivalents of "skip word", "go to end of line", "go to beginning of the line", etc. That kind of useless ranting is what Twitter is for.

#### Moving windows on a Mac - describing the problem

Today we're here to talk about the problem of moving windows around. Macs have inherited the Windows disease of opening each new application in a tiny portion of the available space (maybe they're sharing needles, I don't know), and Macs have gone a little further in that they made their maximize button tiny, secretly gave it two modes of operation, and refused to assign a global hotkey to either maximize operation. Safari must have gotten complications of the diseased window syndrome because it will simply not maximize, I don't know, Safari hates us I guess.

Most people I watch using a Mac go through a short-to-medium length ritual of opening a program—finding their program on the dock (of course), moving their trackpad mouse cursor over to the 5px wide button, clicking the green one, then watching as the program slowly maximizes to fill part of the screen. Then they remember, move their cursor over to the green button again, look down at the keyboard, find the shift key, press and hold the shift key, and click on the button. And the program really maximizes this time. And it's like a minute later, and they're done. Maybe this is how people take mental breaks—"I need a break. I know, I'll open a program on my Mac, that will give me at least a few minutes of downtime."

I don't know you people and I don't know why you're all so bad at this.

Anyway it's driving me a little bit crazy.

#### Stating for the record - Windows 7 is better out of the box

Windows 7 introduced keystrokes to maximize windows and move them from screen to screen. I won't belabor the point except to say that the details are here if you need them, and to say that this is a solved problem on Windows out-of-the-box.

#### ShiftIt to the rescue - moving/maximizing windows for Macs

Meanwhile all is not lost.

Some kind soul named 'fikovnik' on GitHub is maintaining a perfectly good window management program called ShiftIt. Note I am linking directly to the (now hidden) downloads page of the project. In a fit of hilarity/incompetence/extreme unnecessary competence, I compiled my own version of ShiftIt before someone told me there's a downloads page.

Oops, I haven't even mentioned what ShiftIt does yet. ShiftIt assigns global hotkeys to common window resize/move tasks such as:

• maximize window
• move window to left/right half of the screen (I do this a lot with the Chrome Developer Tools window)
• move window to the other monitor, assuming I have two monitors available

Basically, it solves the "How do I move this window" problem in a way familiar to my Windows-thinking brain.

Version 1.6 introduced the ability to move a window to another monitor. 1.5 does not have this shortcut. 1.6 is labeled 'dev', but swallow your fear and be brave, and download the dev version so you can switch monitors painlessly.

#### The World Of Duh: A blog series

Welcome to the World of Duh, a blog series in which I talk about something new to me in an informal, unresearched, and often factually inaccurate way. My goal with this series of posts is to help those struggling with similar issues find a solution. Given I didn't do much research on the topic, the solution I propose may not be the best solution, just "a" solution. You're welcome.
Categories: World of Duh
Thursday, February 21, 2013 7:24:22 AM UTC

It took two minutes.

While browsing the Discourse source code, and more specifically while attempting to load it in Sublime Text 2, I came across their sublime-project file. I was already aware of these project files, which by the way are great for excluding files and folders you don't want to see in the sidebar, or see included in search results. I do a lot of 'Find in Files' in my day-to-day work, and sometimes get 5007 results, most of which come from a log file. Well, I used to get those results; then I saw the light, used a sublime-project file, and everything was great.

Fast forward to three minutes ago and my discovery of the Discourse project's sublime-project file.

#### You can set tab settings in your project files

This is something I had no idea was possible: project-specific tab settings!

      "settings":
      {
          "tab_size": 2,
        "translate_tabs_to_spaces": true
      }


Possibly the greatest thing about this little snippet is that the line "tab_size": 2 is indented 4 spots from its parent. The rest of the file is consistently indented 2 spaces.

I don't know if the "for consistency, on this project we use 2 spaces for tabs universally, and btw this line is indented 4" situation is unintentional irony, but I'd like to think that someone did it on purpose. Because that's what I would do.
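For reference, here's roughly what a fuller .sublime-project looks like, exclusions and all. Only the "settings" block comes from the Discourse file; the folder path and exclude patterns are invented for illustration:

```json
{
  "folders":
  [
    {
      "path": ".",
      "folder_exclude_patterns": ["log", "tmp"],
      "file_exclude_patterns": ["*.sqlite3"]
    }
  ],
  "settings":
  {
    "tab_size": 2,
    "translate_tabs_to_spaces": true
  }
}
```

The "folder_exclude_patterns" and "file_exclude_patterns" entries are what keep those 5007 log-file hits out of 'Find in Files'.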

#### Final Note

Assuming I ever post again, when I do post again, it will be less researched, quick, unformed thoughts, or maybe things that are already obvious to everyone else. Basically like my twitter feed. I'm thinking of calling this series of posts The World Of Duh. You're welcome.

Categories: Ruby
Wednesday, March 28, 2012 1:06:09 AM UTC

Sometimes when working with the Microsoft stack, you'll be offered information on an "NDA", or non-disclosure, basis. Don't do it.

I'm still not sure why Microsoft keeps so much of their product development under wraps, but as they compete with Oracle and I don't, I don't blame them. It probably has something to do with "battlecards", which are the most ridiculous/effective thing I've ever heard of.

If you haven't heard of battlecards, imagine Pokemon, but with ECM systems. The IBM guy says "your database doesn't scale!" and if you haven't memorized the appropriate response line on the battlecard (by the way, the correct answer to any scaling question is "you're a towel!"), you lose the ECM Pokemon battle and surrender the sale. Whoever wins the most ECM Pokemon battles appears as a "visionary leader" at the top right of the Gartner Magic Quadrant.

Also, if we're playing ECM Pokemon, if one player offers to "build an ECM from scratch", they're tarred and feathered, and declared anathema, and a heretic. I don't make the rules, I'm just telling you what the rules are. Tarring and feathering is in the ECM Pokemon rulebook, right underneath the part endorsing referee bribes.

Anyway. I've only received NDA information a few times, and have never benefited from NDA knowledge.

Instead of benefiting from my NDA, all of a sudden I had to concentrate on censoring myself at all times. I had to censor my blog posts and conversations. And worse, my tweets. My tweets!

There's really not much more to say about NDAs in the Microsoft ecosystem. Unless the NDA offers a career-changer (such as getting access to the newest SP a full year ahead of the public), don't receive anything NDA. At best you'll satisfy your curiosity, and at worst you'll get yourself in trouble (the other career-changer).

Categories: .NET
Sunday, March 04, 2012 9:36:15 PM UTC

Apparently one of the topics of discussion at Pablo’s Fiesta was whether TDD is a fad.

As a kind of response to the question “is TDD a fad”, let me focus on something everyone likes to talk about, and that is me. Me me me me me. Not you—me.

### My story before Test-driven development

It’s college. I’m learning about object-oriented programming and have a pretty firm grasp. I can make classes, methods, static methods, and even make the right decision as to whether to go with a struct rather than a class*. I even know about singletons.
*"struct vs class" – in case you're wondering, the answer is "always class, unless you're

Unfortunately, my compiler project is a complete mess. I use the same data structure (let’s call it a class) for each stage of the compilation process. I sit frozen at the keyboard, sometimes with a piece of paper and a box-and-arrows-looking diagram, sometimes just sitting slack-jawed staring through the wall behind the monitor, trying to figure out where to put that behavior I need to implement.

It’s slow going. It’s a lot of rewriting. There’s dead code everywhere, some of which I know about, some of which I don’t. I try to map out everything I need to get this working, and stub out some of the methods I will need later. Sometimes I forget what I’m doing mid-step and just…blank out.

### My story from last Friday*, at work

* Last Friday…in September. Through the magic of first forgetting, then rediscovering this draft, I am able to traverse time itself.

First, I read the user story to make sure I knew what I was supposed to do. Something about adding another bit of our application to be searchable. Check. Once I had a vague idea of what to do, the first thing I did was write a test that spans all the way from a SearchViewModel down to the database (and yes I said database. It’s a simple search, we’re not using Lucene or anything crazier, lay off me). Specifically, I wrote some code to a) create an entity, b) save the entity to the database from whence it will be searched, c) get me a SearchViewModel in as similar a fashion as possible to our WPF-based UI, d) run the search, and e) inspect the ViewModel for the search results I expected.

With this large (and yes, slow) test harness supporting me, I went on to implement the search functionality.
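A hedged sketch of what that end-to-end test might look like. SearchViewModel comes from the post; the Widget entity, the TestDatabase and CompositionRoot helpers, and the NUnit usage are all my inventions for illustration:

```csharp
// Hypothetical reconstruction of the a)-e) test described above.
[TestFixture]
public class SearchTests
{
    [Test]
    public void Search_finds_a_newly_saved_entity()
    {
        // a) create an entity (Widget is a made-up entity type)
        var widget = new Widget { Name = "sprocket-handle" };

        // b) save it to the database from whence it will be searched
        using (var session = TestDatabase.OpenSession())
            session.Save(widget);

        // c) build a SearchViewModel the same way the WPF UI does
        var viewModel = CompositionRoot.Resolve<SearchViewModel>();

        // d) run the search
        viewModel.SearchText = "sprocket-handle";
        viewModel.ExecuteSearch();

        // e) inspect the view model for the expected results
        //    (lazily counting results, per the aside below)
        Assert.AreEqual(1, viewModel.Results.Count);
    }
}
```

Note the test crosses every layer on purpose; that is what makes it large and slow, and also what makes it a trustworthy harness to build under.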

#### An aside

Let me take a moment to talk about a few things I didn't test. I didn’t write a unit test for every interaction within my own internal search API. I didn’t write a test from the ViewModel that mocked out the search service (Searcher) to test interactions. In my test, I didn’t even inspect properties of the search results to ensure that I’m getting the right search results, just lazily counted how many search results come back. See, I already have tests that verify each and every property coming back from search results, so why would I bother checking every property in every test? Anyway. "What to test" is the subject for another day, and no, I don't have the final answer either.

#### Back to my last Friday

After implementing enough code to make the first test pass, I ran the UI on my machine and verified that searching did everything it was supposed to. No problems this time—wait, I somehow mismatched two properties—oops, need to fix that. Went back (without writing a test or adjusting a test to test for the bug I just identified), made the necessary code change, fired up the UI again and inspected.

After I verified the code was in a stable state, I went back looking for things to clean up. No methods to rename, no dead or “temporarily commented-out” code, no code sitting in the wrong class. This time. So no refactoring work needed.

#### Another side note

Most TDD instructions will tell you to only implement the bare minimum needed to make a test pass. This is good contextual advice, given that the vast majority of developers create "speculative" methods and functionality and need to learn how to do truly emergent design (also known as design-by-example, or, the "YAGNI-You Ain't Gonna Need It" principle of design) via TDD.

But, we are also told that it's okay to do some up-front design. Depending on who you heard it from, sometimes you hash out a class diagram that fits on a napkin (then, as they famously say, throw the napkin away). Or, you can use CRC cards or Responsibility Driven Development(?), and Spiking, all of which, even if you don't know what they mean, sound like they involve doing something above the bare minimum needed to make a test pass. Anyway. Kent Beck's TDD book even tells you Triangulation is only one of the approaches to making a test pass, another approach being "just write the code you actually want in your finished product AKA the Obvious Implementation".

Okay. So here we are. We have the same people a) telling you to practice "pure" TDD by doing zero up-front design, and then b) telling you all these other things that directly contradict a). I've personally reconciled the conflicting advice as follows:

1. Practice YAGNI. Speculative design tends to be bad, and from what Resharper tells me, the rest of you leave a lot of completely and obviously dead code lying around, not to mention all the extra unused public methods and classes you can find with FxCop or Resharper's "Solution-wide analysis". You guys, you guys.
2. But do spend some amount of time thinking ahead, and maybe just implement the code you want to end up with, not strictly the bare minimum needed to make a test pass. If you have an Add(a, b) method, instead of writing 50 tests and triangulating towards "return a+b", you're allowed to write a+b the first time and write enough tests to catch all scenarios. That said, Triangulation helps keep my mental load down so I can keep moving towards solving the problem, and I often find that, having solved the problem through Triangulation, I have followed YAGNI and an unexpected design has "emerged".
3. If you're not sure if you should follow #1 or whether you are allowed to cheat and follow #2, well, follow #1. Don't be "pragmatic" and use the word "pragmatic" as the blanket you use to justify whatever you want to justify.

If the YAGNI style of programming is new to you, the amount of code you'll rewrite while attempting it at first will be staggering, but through the pain, after rewriting everything about 5 times and deleting 2/3rds of your codebase, you'll figure out that you haven't really been applying YAGNI. And then you'll rewrite everything a few more times.

I'm probably sounding a little preachy, so let me be clear: I'm talking to myself from a few years ago. And I'll probably come back some years later and yell at myself for dismissing the relative value of unit tests versus integration tests. Yes, see you in 2020 for the follow-up post: "Peter is wrong: Again. Part 17 and counting".

#### Back to last Friday (again)

I run the tests (I have about twenty by the time I'm totally finished), do a quick diff on the files I have checked out, check in, and verify the build remains green. Our build compiles and runs all tests in about 30 minutes. It’s slow. And no, I didn't check in every time the tests turned green (though I do shelve frequently). This paragraph sponsored by twitter hashtag #realtalk. #hashtagsInBlogPosts #weUseTfs

#### A world of difference

Being able to take a vague idea of what I need to do and steadily translate bits of requirements into working code is a world of difference from me in college. I haven't reached the summit—I probably shouldn't have to write the phrase "I haven't reached the summit", but just in case: "I haven't reached the summit. No comments please."

And, back to the original question: is TDD a fad? As for me:

Learning (and applying) Test-Driven Development has improved my programming ability more than anything else, by far.

So to answer the question: hey. Hey. Let's be pragmatic and not get too carried away. Maybe it is a fad. It's good to be pragmatic, not too far on the left, not too far on the right, but somewhere in between in the pragmatic zone (the region where pragmatism reigns). Pragmatism is delicious when spread evenly over toasted bread and served with tea. If someone is drowning, be pragmatic about the situation. If you had to choose between having a tail or the gift of flight, be pragmatic.

Categories: .NET
Wednesday, January 25, 2012 6:20:41 AM UTC

### tl;dr

If you're considering trying out Ruby, just run Ruby inside of Ubuntu on a VirtualBox VM instead of Ruby on native Windows, because the Ruby/Ubuntu/VirtualBox combo is completely painless. I might even be so bold as to say flawless.

### Linux is painless (today)

As someone who has lost hours and hours and hours to unsuccessful troubleshooting, and as someone who has experienced a Personal Complete And Total Data Loss Incident, I want to acknowledge that running "linux" in its many flavors can be painful.

And it is with that in mind that I want to let you know that, as of January 2012, I'm having no problems.

With no special configuration required, I've set up:

• VirtualBox (free for non-commercial use)
• the newest stable Ubuntu 64-bit release
• with an instantly adjustable monitor resolution via VirtualBox extensions for Ubuntu and RIGHT CTRL+F
• with working sound *note: don't take this for granted, you punks
• with both Chrome and Firefox
• with internet access, even when I switch from ethernet to wireless and back (my fellow former VirtualPC/Virtual Server users are having PTSD flashbacks right now, sorry)
• with equally snappy performance as the host Windows 7 machine
• with easy and "Windows intuitive" text editing via Sublime Text 2 AKA "new hotness" – by the way, Sublime Text behaves identically on Windows and Ubuntu, so I'm having zero Text Editor Culture Shock. We can talk about text editors later—today, the point I want to make is that by using Sublime Text I can defer "the talk". Compare to the past where my choices were vim (:qa!), emacs (CTRL+X, CTRL+C), and pico (oh sweet, sweet menus written in English!); or the past where I couldn't figure out how to get pico installed and so worked with vim. (Vim protip: press "i" and it goes into Insert Mode; then anytime it starts acting weird, hit ESC a bunch to get back into Normal Mode. And yes, I said "protip".)
• with copy/paste between host and VM
• with working VM pause/resume that takes a grand total of 2 seconds

So to be clear, it wasn't always this easy.

It probably took less time to install and update Ubuntu than a Windows 7 VM, and I've done both recently, so I guess that makes me a leading world authority on how long it takes to install operating systems on VMs.

It even took less time to blunder through apt-get-ting/gem-ing/bundle-ing all the dev tools on Ubuntu than to sleepwalk through the VS2010 + SP + SQL + SP installers.

So there's your anecdote. As of January of 2012, it's easy.

### What does this mean

If you're considering tinkering with "the Ruby" or whatever, just install VirtualBox and Ubuntu…or whatever works for you. I'm just here to tell you that it's very easy to get an Ubuntu VM set up and running, and it's easier than trying to get Ruby working on Windows.

And, when the Ruby On Windows Pain Factor dramatically drops (like it did with git—oh by the way—if you haven't heard already, running Git on Windows is easy now), maybe you'll hear from me again.

Categories: Ruby
Wednesday, January 18, 2012 4:48:35 AM UTC

This may be good general advice, but today I just mean it in the context of using PowerShell's call operator (the glorious &, AKA "The Ampersand").

I could spend a lot of time building up to the good stuff, but I'll just get to the point. I'm going to run "echoargs", which most recently helped me troubleshoot calls from PowerShell to MSBuild.exe. You'll see why I need this utility soon enough:
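A reconstruction of the kind of uneventful call I mean; the exact arguments and the echoargs output format are from memory, so treat them as approximate:

```powershell
# EchoArgs.exe ships with the PowerShell Community Extensions; it just
# prints back each argument it received. With simple arguments, the call
# operator behaves exactly as you'd hope.
& echoargs one two "three four"
# Reports something like:
#   Arg 0 is <one>
#   Arg 1 is <two>
#   Arg 2 is <three four>
```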

Okay. That was the easy part. So far, when calling commands from PowerShell using the call operator, everything is pretty much working as expected. Now let's try something…different…:
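Something like the following is the sort of input that produces the head-scratcher; the exact strings here are invented, but quote-marks embedded inside a single string argument are the trigger:

```powershell
# Now pass ONE string that happens to contain quote-marks of its own.
$mystery = '"   1 2 3 4" 5 "6    7 8 9"'
& echoargs $mystery
# You might expect a single argument. Instead, the embedded quotes can get
# re-tokenized on the way to the external program, and (at least on the
# PowerShell versions I mean here) echoargs can report three arguments:
# <   1 2 3 4>, <5>, and <6    7 8 9>.
```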

I'm not exactly sure what to say here. The first example is a thinly-obfuscated real-world head-scratcher I've stumbled into over and over and over. The second line I wrote to try to make some sense out of PowerShell's parsing rules. And the output for the second line I can only make sense of by inventing parsing rules like "throw away some of the quotes, then start parsing" and "if the quote-marks are on the left side of the word, move them to the right". You won't find these parsing rules in an example in the dragon book.

So I kind of gave up.

You see, I had a longer, well-reasoned blog post planned out. In my pretend fairy land, I'd spend a few minutes doing research, master PowerShell's parsing rules, and write a helper method to encapsulate the weirdness so you and the rest of the world could live out your sheltered hobbit lives in the Shire, never understanding the service I provided for you. I'd be the Aragorn of this story, and would be pretty rad compared to you lame-os.

I even had a "reasonable explanation" for the weird behavior to link to here. And don't get me wrong, that's good information.

But nothing explains "   1 2 3 4", followed by "5", followed by "6    7 8 9" as your argument list.

### Lesson learned: don't be me

There's probably a better lesson to be learned, like

a) trust PowerShell's call operator syntax about as far as you can throw it, and

b) when you throw it, watch the skies carefully, or the moment you turn away PowerShell will boomerang back at you and aim for your throat.

Okay.

Furthermore, echoargs.exe, which ships with the PowerShell Community Extensions, is built for the sole purpose of troubleshooting this kind of weirdness. It's useful, it's small, and it's safer than taking a boomerang to the throat every time you test.

Furthermore, when using the call operator (&), use the more explicit, longhand form. Even though it makes most calls unreadable to humans, for those of us who matter (the parser), it is clear as day. See screenshot + gaudy green text below:
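Concretely, the longhand style I mean looks something like this; the MSBuild path and switches are hypothetical, the shape of the call is the point:

```powershell
# Spell the call out the long way: the command in its own quoted string,
# each argument its own separate token, nothing clever embedded in any of
# them. Unreadable to humans, unambiguous to the parser.
$msbuild = 'C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe'
& $msbuild '/target:Build' '/verbosity:minimal' 'MySolution.sln'
```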

Furthermore, if you're writing a generic script that accepts input you can't control, and some of that input may or may not include quote-marks…find whoever is responsible for assigning you such a doomed task and punch THEM in the throat*. They deserve it**.
* don't do this
** they probably don't

By the way, if you know why these rules are the way they are, by all means answer the question here and I'll give you the appropriate whuffie or whatever they call it these days. And no, spell checker, 'whuffie' is not a misspelling.

### Hope for the future

Just so you know, we may see a fix for this class of problem in PowerShell v3.

Categories: .NET | PowerShell
Technorati:  |
Wednesday, January 18, 2012 4:48:35 AM UTC       |  Comments [0]  |  Trackback
Tuesday, January 17, 2012 2:38:36 AM UTC

### Or, "How to avoid crashing Visual Studio while working with XAML"

Our project may have problems. I don't know. What I do know is that, when you open a XAML file in Visual Studio, you are officially in The Danger Zone. And, after much careful thought and dozens of "unpredictable" crashes, I've identified the problem.

Well, of course I haven't identified the real problem. But I've found a suitable way to tiptoe around the problem.

### A Simple Workflow

1. Open the XAML file for editing. Whether or not you have design view visible in any way is immaterial. This happens in both code view and design view, and split-screen view.
2. Make any and all changes.
3. Save your file. (This step, while not necessary, will make it easier on you in the Visual Studio recovery process post-crash.) It's important to note that at this point you've entered The Danger Zone.
4. Observe your task manager's real-time CPU chart max out one of your CPUs. You're still in The Danger Zone.
5. Close all XAML files. You may leave code (C#) files open in Visual Studio if desired. This will trigger further processing.
6. Continue to observe devenv.exe's CPU usage.
7. When CPU usage drops to 0%, even for a short while, let out a yelp of joy! You've passed through The Danger Zone. Give yourself a pat on the back. (Yes, I mean physically give yourself a pat on the back. It's awkward, but you've earned it!)
8. Now you can run your application without crashing Visual Studio!
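If you'd rather not stare at Task Manager, here's a rough sketch of steps 6-7 in PowerShell: poll devenv.exe's consumed CPU time until it stops climbing (this assumes a single devenv.exe instance, and the 0.25-second idle threshold is a guess):

```powershell
# Poll devenv.exe's total CPU time; when it stops climbing between samples,
# you've (probably) cleared The Danger Zone.
while ($true) {
    $before = (Get-Process devenv).TotalProcessorTime
    Start-Sleep -Seconds 5
    $after = (Get-Process devenv).TotalProcessorTime
    if (($after - $before).TotalSeconds -lt 0.25) {
        Write-Host "devenv.exe looks idle - pat yourself on the back."
        break
    }
}
```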

### Ways to know you're in The Danger Zone

1. Visual Studio crashes when attempting to "Play" or launch your WPF project from Visual Studio.

2. Visual Studio crashes when it receives focus again sometime while running your WPF app.

3. Visual Studio crashes when you terminate your WPF application.

### Highway To The Danger Zone, by example

This just happened two minutes ago. I'll point out I avoided crashing Visual Studio again, thanks to my stick-to-it-iveness. Enjoy.

PS—I have a quad-core machine, so this graph represents one of the four CPUs entirely pegged by devenv.exe. I don’t know why, nor am I particularly interested in reporting a bug. I just know how to avoid the crashing and whatnot.

Categories: .NET
Technorati:
Tuesday, January 17, 2012 2:38:36 AM UTC       |  Comments [0]  |  Trackback
Tuesday, January 10, 2012 7:06:55 AM UTC

### tl;dr

MSBuild’s OutDir parameter must be of the form:
/p:OutDir=C:\folder\with\no\spaces\must\end\with\trailing\slash\
…or of the form:
/p:OutDir="C:\folder w spaces\must end w 2 trailing slashes\PS\this makes no sense\\"

I have written a self-contained PowerShell function to handle OutDir’s mini-language that exists because…I don’t know why, because they hate us? Anyway, the script is all the way at the bottom. PS “backwards compatibility” is code for “we hate you,” in case you get “backwards compatibility” as the reason OutDir’s syntax is so hostile on your Connect issue you filed so diligently. That’s also a trick, because you’re not supposed to file Connect issues.

### I hate you, OutDir parameter

Okay, so the post title is unhelpful. Deal with it. I’m in pain, and a suffering man should be afforded some liberties. I’m like Doc Holliday—minus tuberculosis, plus build script duties. Or the whooping cough. I didn’t pay much attention during Tombstone, but he did cough a lot. Could be parasites.

Build script duties are some of the worst, alongside SSRS reporting duties, SharePoint integration duties, auditor-friendly deployment documentation duties, or any combination of those three. I don’t know what IT auditors do for fun—I simply can’t imagine. I don’t know if they can either. Think about it.

…back to build scripts. A bad build script will kill your chances of getting any kind of an automated deployment working, and if you can’t do builds or deployments well, you end up editing your production web.config in production and writing Stored Procedures because deploying code is just so painful. And then no one wants to deploy because it takes about three weeks and seventeen tries before you get it right, and no one’s writing any sort of automated tests around your stored procedures (except that one guy who’s waaay too excited about T-SQL, but he writes try/catch blocks in T-SQL and is pushing for Service Broker, so…can’t trust him), and this has all kinds of implications, and then all of a sudden exclusive checkouts sound like a good idea, and you wake up one morning and you’re doing Access development. Again(!!!). Except less productive. And your customers don’t trust you, and then one day you’re just fired outright, and the next day you’re on the street, and then finally, out of options, you reach the lowest low—you develop and release an app on the iTunes app store. Lowest of the low. Can’t possibly get worse, unless you’re forced to write code in Ruby, which requires you join the Communist Party, as is clearly written in the AGPL (yes, this is why Microsoft wrote their own GPL—they’re fighting both terror and communism, and socialism—one license agreement at a time). This is why you read the EULA. Communism is why.

Anyway, MSBuild’s OutDir parameter isn’t making my build script duties any easier.

### Regarding OutputPath

I tried researching OutputPath, but it looks like a different metaphorical universalist path up the same mountain named “appending 1 or more slashes to the end of everything for no reason”, so I gave up. When it comes to doing in-depth research on any framework, including, and today featuring, MSBuild and its wonderfulness, you either find out that a) you were woefully ignorant all along and just needed that one tidbit of knowledge, with which you can SUCCEED, or b) you were unfortunately justified in distrusting your framework because your framework has FAILED you. After a few extremely painful episodes, I started giving up early and looking for a workaround, which it turns out is what most people do anyway.

OutputPath smells like it has the same problems that OutDir has, so I just gave up on it and went with the workaround (below). I could be wrong about OutputPath. Blame SharePoint for my wariness.

### But I’m not only here to complain

I’m here to complain, don’t get me wrong. Like a wounded Rambo provided with only fire, kerosene and his trusty serrated knife, I’m writing this post as a kind of Rambo shout before I pass out from the pain after cauterizing my wound the Rambo way. Life sucks*.
*not actually true

But I’m also here to let you know, hey, if you’re in the Cambodian jungle* with a bullet wound and you’ve got to do something, here’s what you do. Maybe you won’t bleed all over the flora and fauna** with your bullet wound in the Cambodian jungle as long as I did, maybe this post will help you along in your journey…whatever that journey is. It’s a journey of some kind. Let’s not stretch the metaphor too far. Wait, aren’t we talking about build scripts?
*I am not going to do any research, do not question or fact-check my Rambo knowledge. Just assume I got it right.
**it seemed like the right thing to say at the time

### Why: A brief explanation why OutDir exists

Now, onto something resembling a technical blog post.

OutDir exists so that, when compiling a Project (e.g. “msbuild MyProject.csproj”) or Solution (e.g. “msbuild MyManyProjects.sln”), you can tell MSBuild where to put all the files. Or if you like fancy words, “compilation artifacts for your ALM as part of your SDLC”. You’re welcome. I’m SDLC certified 7-9 years experience, ALM 8.5 years, MS Word 13 years. Hire me, I’ve got an edge on the other candidate by 2.5 years SDLC and a whopping 9 years MS Word. Numbers can’t lie! Plus I’ve got 5 years OOP, 3 years OOA, 4.5 years OOD. You can’t argue with numbers.

Where were we? Ah, putting compilation artifacts in folders. Without OutDir, you don’t have that control.

Let’s take the simple example. “msbuild MyProject.csproj” will put MyProject.dll in the bin\Debug subfolder, just like compiling from Visual Studio. If you set the configuration to Release, ala “msbuild MyProject.csproj /p:Configuration=Release”, everything will be dumped into bin\Release. If you have no idea what’s going on and you make a third build configuration, e.g. “msbuild MyProject.csproj /p:Configuration=Towelie”, the files will be dumped in bin\Towelie.

You get the idea. By default, files go in bin\$Configuration, whatever $Configuration happens to be at the time.

So here comes OutDir to shake things up. Let’s try a simple example:

msbuild MyProject.csproj /p:OutDir=C:\temp\MyProject

Haha! Tricked you! This simple example doesn’t work! You forgot the trailing slash!*
*serious aside: would it have taken more effort to write and localize an error message in seven hundred languages including Bushman from Gods Must Be Crazy 2, or just accept the path without a trailing slash and fix it for us? I can’t imagine it would be harder to just scrub the input. I’m serious. I’m Batman voice serious. Seriously.

Okay, let’s try this again, but after paying the syntax tax:
msbuild MyProject.csproj /p:OutDir=C:\temp\MyProject\

You get exactly one guess what happens. Okay, who cares, I’ll just show you.
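With the trailing slash paid, the build output lands directly in the OutDir folder. Roughly (file names hypothetical):

```powershell
msbuild MyProject.csproj /p:OutDir=C:\temp\MyProject\
# The drop folder now holds the build output directly, no bin\Debug nesting:
dir C:\temp\MyProject
#   e.g. MyProject.dll, MyProject.pdb, content files...
```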

So you get the idea.

### A second example, this time illustrating the use of path names with spaces

Okay, first off, MSBuild’s OutDir parameter is only one of the many, many reasons that I dislike spaces in filenames, path names, even passwords. I mean passphrases. Of course I mean passphrases. Passwords are crackable. Passphrases are the way to go.

Don’t even get me started about Uñicode support.

Second, let me point out that I can work perfectly fine without setting OutDir. I know where my files go, and I know how to reliably copy files from bin\debug folders directly into production as part of my nightly build process (PS for the humorless, don’t try that). But, I need OutDir, because TFS’s default build definition uses OutDir whether you like it or not. And, in the course of setting up a working TFS 2010 build, I needed to a) understand, and b) simulate TFS’s compilation process.

Anyway, some of our TFS build names have spaces in them, which means that some of the folder names have spaces in them, which means that my script that calls OutDir needs to handle folder names with spaces in them. Let’s try vanilla latte half-chop burned-choco cream soda vento rico suave way of calling OutDir and see what happens:
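A sketch of that first naive attempt (the build-name-with-spaces is hypothetical):

```powershell
# No quotes around a path containing spaces - MSBuild sees the path chopped
# off at the first space, and everything after it as mystery arguments:
msbuild MyProject.csproj /p:OutDir=C:\temp\My Build Name\
```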

Okay, we cheated somewhat, because we didn’t even bother to surround our long path name with quotes. Rookie! Let’s try again:

Okay. Surrounding your long path name with quotes, along with the trailing slash isn’t cutting it.

This “Illegal characters in path.” error message is where I’ve lost probably…let’s not estimate, my professionalism will be called into question. Anyway, let’s just say “a lot of time” was lost on this problem.

So here’s the solution:
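Per the tl;dr up top: quotes around the path, plus a SECOND trailing slash (path hypothetical; if you're calling this from PowerShell rather than cmd.exe, the function at the bottom of this post wraps this up for you):

```powershell
# Quotes AND two trailing slashes - the only form MSBuild accepts for an
# OutDir containing spaces:
msbuild MyProject.csproj /p:OutDir="C:\temp\My Build Name\\"
```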

I don’t know why, and at this point, I’ve lost the fighting spirit. It’s setting an output folder in MSBuild after all, I’m not exactly writing a new OS scheduler, though I have a vague idea that OS scheduling is not like Outlook scheduling, and my resume says I have 3.5 years of OS Scheduler experience, so I can speak to it.

Someone in the comments of this blog post suggested the double trailing slash solution, and what do you know, it worked, and here I am much later writing a blog post that is way too long to justify this much effort.

### Wrapping up what we’ve learned today, in bullet point form

• Doc Holliday has either TB or the whooping cough. Or parasites.
• They hate us:
• MSBuild’s OutDir parameter must be of the form:
/p:OutDir=C:\folder\with\no\spaces\must\end\with\trailing\slash\
• …or of the form:
/p:OutDir="C:\folder w spaces\must end w 2 trailing slashes\makes no sense\\"

### Wrapping up what we’ve learned today, in PowerShell function form

Enjoy. There’s almost nothing special about this. The value Compile-Project gives you is that it hides (or if we’re using the fancy words, encapsulates) the horrible rules OutDir imposes on us, freeing the caller to worry about, oh, I don’t know, writing an OS Scheduler.

Feel free to cut-and-paste. I’m not going to force you to join the Communist Party like the AGPL does.

And do note the commented-out psake-friendly line. Psake’s Exec function exists to encapsulate the weirdness with executing DOS commands from PowerShell. I figure, if you’re calling MSBuild, chances are good you’re calling it from psake, but if not, here’s a script that will bubble up a reasonable error message to the user.

Psake or not, if you’re calling this PowerShell script from TeamCity, the error message will bubble up to the top. If you’re using TFS, follow these instructions to experience the joy that is visual programming (and yes, you’ll also get good error messages bubbled up to the top).

Also, this isn’t one of those bulletproof, general-purpose functions, what with proper types and default values for each argument, logging via write-verbose, a –whatif switch, documentation, and whatever else I’m ignorant of. I don’t do that day-to-day for my PowerShell scripts. I just write what I need today, and maybe generalize what I have if I use the same function twice in a script. It’s not like sharing functions between PowerShell scripts is desirable. Like sharing needles. A discussion of the merits of needle sharing is a good way to wrap up a blog post. And on that note, here’s the script:

$msbuildPath = 'C:\windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe'

function Compile-Project($project, $targets, $configuration, $outdir) {
    if (-not ($outdir.EndsWith("\"))) {
        $outdir += '\' # MSBuild requires OutDir end with a trailing slash #awesome
    }
    if ($outdir.Contains(" ")) {
        # read the comment from Johannes Rudolph here:
        # http://www.markhneedham.com/blog/2008/08/14/msbuild-use-outputpath-instead-of-outdir/
        $outdir = """$($outdir)\"""
    }

    # If you're calling this from psake, save yourself the trouble and use their "exec" command:
    # exec { &$msbuildPath """$project"" /t:$($targets) /p:Configuration=$configuration /p:OutDir=$outdir" }

    # Vanilla PowerShell, non-psake. Capture the output (stderr merged in via
    # 2>&1) so we can report it if the build fails:
    $msbuildErrorOutput = &$msbuildPath """$project"" /t:$($targets) /p:Configuration=$configuration /p:OutDir=$outdir" 2>&1
    if ($lastExitCode -ne 0) {
        write-error "Error while running MSBuild. Details:`n$msbuildErrorOutput"
        exit 1
    }
}
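A hypothetical call, to show what the caller gets to NOT worry about (project path, targets, and drop folder are made up):

```powershell
# Note the caller passes a plain folder path - spaces allowed, no trailing
# slashes, no quote gymnastics. Compile-Project handles OutDir's rules.
Compile-Project 'C:\src\MyProject.csproj' 'Build' 'Release' 'C:\drops\My Build'
```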

Categories: .NET | PowerShell
Technorati:  |
Tuesday, January 10, 2012 7:06:55 AM UTC       |  Comments [2]  |  Trackback
Friday, December 02, 2011 1:17:40 AM UTC

This one’s for the search engines, folks, but also for those of you who have the same assigned corporate laptop I have. Allegedly.

As far as I can tell, and from what I’ve found on this forum thread, the Dell E5420 (or as my coworkers say, the “Leapfrog Laptop”) can support a second internal hard drive through the DVD drive bay.

It’s important that I mention a few potential problems:

• You’re going to have to use a screwdriver to open your laptop and exchange the bay drives. They are not hot-swappable. (I’m perfectly okay with this)
• The custom drive caddy is not designed for the E5420 and won’t line up flush with the laptop. As they point out in the thread, allegedly you can harvest your CD drive faceplate and re-use it. Allegedly.
• The whole idea of buying third-party electronics for $20 from small internet vendors and installing them in the very heart of your primary work machine is unnerving. (Even so, I’m okay with this.)

If I ever pull the trigger and buy one of these, I should probably post an update.

As a final point, let me say that you can buy secondhand/refurbished/possibly “hot” docking stations for the Leapfrog Laptops for ~$100ish. They’re a good investment to haul to your work site, or for home. I’m a big proponent of true dual+ monitor setups. And having tried, I hate working with non-docked laptops.

Note Dell has changed up their docking stations, so your old docking station won’t work anymore (those old ones can be found online for as low as ~$20). On the plus side, the new docking stations (E-Port) have dual-DVI output. And if there’s something I hate worse than using a laptop as it was intended, it’s straining my eyes trying to read blurry ClearType text over an analog signal (i.e. VGA cables). The old docking station had one DVI and one VGA (what I now know as “VGA: the eye killer”) port. Ask me about my pixel-free weekend sometime.

Categories:
Technorati:
Friday, December 02, 2011 1:17:40 AM UTC       |  Comments [3]  |  Trackback
Friday, November 11, 2011 11:07:42 PM UTC

Every post I write should come with this standard disclaimer. If I ever re-do my blog, I’ll link to this standard disclaimer from the top of every blog post.

## This Stuff Just Doesn’t Matter

This stuff, this software development stuff in and of itself, just doesn’t matter. It isn’t the end goal. There are bigger things in life.

All things being equal, it is better to be a competent software developer than an incompetent software developer. This is why I write posts about how to invest my limited time.

All things being equal, it is better to be learned rather than ignorant about software development practices. This is why from time to time I feel the urge to linkblog posts I find on twitter that I believe my blog audience (i.e. you) haven’t seen, and may benefit from. My linkblog posts are gold, I tell you, gold.

All things being equal, it is better to be intentional about your career path and career goals, especially when it comes to dealing with Microsoft’s endless framework lahar. I see a lot of time wasted on studying for exams, and attention given to half-baked frameworks that subsequently under deliver. And I don’t know why, but I have the urge to fix this problem.
For those of you who could not care less about helping others make wiser choices with their learning investments, sorry, but it’s who I am, and it bothers me enough to blog about the topic…frequently.

All things being equal, it is better to go to work and experience less unnecessary pain. This is where a lot of my “written for the search engine” and “surviving TFS” posts come from, and where I hope most people find value. I write many of my blog posts with the singular goal of reducing pain. Pain isn’t the ultimate evil. (There’s a great discussion about pain in A Canticle For Leibowitz, which by the way is the first post-apocalyptic book, but I’m too lazy to find the exact quote. PS—dork alert)

All things being equal, it’s more productive for me to blog here than to sit on the couch on a Saturday and take a nap while watching college football. Though there’s nothing wrong with any combination of naps and college football. It’s also better for me to blog than to play video games; or browse the gaming subreddits; or watch someone on twitch.tv live streaming while they play video games; or best of all watch someone on twitch.tv live streaming while they browse the gaming subreddits, which frees you from the chore of browsing the internet yourself. You should probably visit that hyperlink, because it’s just perfect. It’s like watching Inception if Inception featured laziness as its major theme. It just makes sense. Go watch Inception, and go click that link.

## I’m not a super expert genius ninja samurai ZeroCool hacker

If it appears that I’m presenting myself as an authority on any topic, make sure I back it up with personal experience. If I don’t have the personal experience to back up my claims, take my argument for what it is: an unsupported opinion. I know that I’m not an expert, and when writing blog posts my self-image doesn’t change—but maybe here on the internet, where they don’t know you’re not a dog, you don’t read my posts the way I intend for you to.
I’m not an expert, but if it so happens I am, I’ll tell you why. This is a good rule in general. Given blog posts aren’t built off of months of investigative journalism or academic research, the best blog posts are harvested from personal experience (as opposed to blog posts written by pundits with no experience).

And let me draw one more point from this: a lot of .NET experts aren’t experts either on the subjects they write about. They are no more an expert, no more experienced, no more capable and have no better software development experience than you or me. They’re just people like you or me with better communication skills. With that said, some of them are true experts. The difference between a good blog post and a great blog post is, in my opinion, the great blog posts are harvested from years of painful experience. Compare this great blog post to my post on the same subject, but clearly written from a newbie’s perspective for an example of this in action.

One additional point I’d like to make is that I feel like I’ve crested the hill and I get it now. Software development is a known problem for me. I’m comfortable with the things I know, and I’m comfortable not knowing the things I’m fuzzy about and still working on (see: estimation; finding out what the customer wants), and I’m comfortable with the fact that I may never learn Haskell, or SmallTalk, or BizTalk, or Joomla. This greater sense of perspective wasn’t always how I was, and I get the idea that most of the working world is full of people who don’t get it yet. So yet another of my part-time crusades is to get everyone up to speed, at least to the point where they get it. I’ve met people who (without some help) will remain forever behind, forever…for lack of a better word: incompetent. And I don’t see my “getting-it-ness” as unique expertise but simply what all software developers should have. I look around and I don’t see that…getting-it-ness. Find me a better word.
I can’t write more explanatory text right now without repeating myself.

## I will make every effort not to blog work arguments, or be passive-aggressive in general

My theory is most blog posts spring forth from blog arguments or work frustrations, as I feel this urge to blog work arguments from time to time. If I won’t say it in person, I shouldn’t say it on the blog. And even if I say it in person, some work arguments should be kept in the family. Every now and then I step out of bounds.

## And finally, this stuff just doesn’t matter

Software development is not important in the grand scheme of things. Being a bad software developer in and of itself does not make you morally inferior. To pick on something specifically: software craftsmanship is not a new morality, whereby you are righteous (professional?) if you write clean code and unrighteous (unprofessional?) if you don’t. Depending on the bigger picture, and I place emphasis on the phrase bigger picture, you may be doing serious harm by e.g. overdosing radiation therapy patients via your software, or more likely, putting your company out of business because of your incompetence—but in and of itself, being a bad (worse than average?) software developer isn’t evil.

This stuff just doesn’t matter. Every post I write, no matter how passionate I may sound, no matter if in truth I get carried away and lose perspective and start believing it, this stuff just doesn’t matter.

Friday, November 11, 2011 11:07:42 PM UTC       |  Comments [1]  |  Trackback
Friday, September 16, 2011 3:44:59 AM UTC

I’ve posted the conclusion below in bullet point form. If you’re a dirty, filthy blog post skimmer, then head on down to the very bottom. I’ll see you there, fellow skimmer.

Microsoft has announced a great number of things at BUILD this week. First among them is the new tablet OS known as Windows 8. It happens to run on top of Windows 7 for now, but it’s clearly a tablet OS.
This is early, but it’s spinning around in my head, and I feel like I’ve got to write this somewhere. Consider this your warning. I (and the rest of us as .NET developers) need to answer a question for ourselves, and soon:

### The Big Question

As an enterprisey .NET developer with a day job doing non-WinRT-related work, is it worth my time to go out of my way to learn WinRT?

### The Big Answer

I don’t know.

### A Longer, More Rambling Answer

It’s complicated. On one side, Metro is slick and is clearly, obviously the better way to build apps in Windows going forward. On the other side, the 2005 version of me could have said the exact same thing about WPF, and a little before that, WinForms. Actually hey, let’s try doing a little Microsoft Framework Mad Libs and replace “WinRT” with older technologies. Here we go:

### Microsoft UI Technologies mad libs

MAD LIBS 2002 edition: WinForms is slick and is clearly, obviously the better way to build apps in Windows going forward! And check out that designer!

MAD LIBS 2005 edition: WPF is slick and is clearly, obviously the better way to build apps in Windows going forward! It’s one of the unmovable, unshakeable, eternal pillars of Longhorn! And check out this cool designer called Sparkle! But don’t worry, graphic designers will do all the designing for us in Sparkle! It’s a new era! There’s also this sweet thing called “Windows Marketplace” where you can hawk your apps! What’s that? WinForms? Well, it will still be supported, but you can mentally flush everything you know about WinForms down the drain. Unless you’re unlucky and stuck with a WinForms project, in which case…I guess it’s a good thing you know WinForms already.

MAD LIBS 2007-2008-2009-2010-ish edition: Silverlight is slick and is clearly, obviously the better way to build apps in Windows and the web going forward! WPF? Well, Silverlight uses XAML too! It’s like WPF, only less of it. Check out NetFlix! Oh, that isn’t really an application.
Well, just trust me, it’s the future.

MAD LIBS 2011 edition: WinRT is slick and is clearly, obviously the better way to build apps in Windows going forward! Check out these sweet free tablets! There’s going to be an app store! Windows Marketplace? What? Oh, no one used that, it shipped with Vista. Don’t worry about it. This new app store is called, wait a minute, yeah. It’s still Windows Marketplace I think. Silverlight? Well, we’re not calling it that, no, but, there seems to be a lot of Silverlight here. But it’s not running on .NET, either the DLR or the CLR. We’re not sure yet*. But what’s clear is, WPF is no longer needed—remember how sluggish it was? Oh, are we not allowed to mention that yet? Ask me about performance next year. Maybe we’ll talk about performance then, if I can bend the messaging such that I am praising how good WinRT is in comparison to WPF. Designer?

*really, I’m not sure yet. WinRT most resembles Silverlight. Check out Rob (of Caliburn.Micro, and you should know what Caliburn.Micro is), he seems to be doing self-directed digging on WinRT and is on fire on the twitter.

### Back on track

Ok. What I’m trying to say in the mad libs above is that you can’t trust Microsoft to stick with anything. You just can’t. Everything sounds great right now, and yes, I do believe Metro is cool and slick and I could theoretically make sweet sweet tablet apps with it. Period. But. Comma. But, I can’t trust them. The Longhorn demos were really, really good. I don’t remember reading anyone talking bad about WPF at the time. Sorry if I missed out, but I just don’t remember it. We all loved it. And what wasn’t to love? WPF is the future. Right? Remember the three pillars of Longhorn? As someone else pointed out on Twitter, remember the Office ribbon we were all going to put in all of our apps? Remember data access strategies? The Oslo hype? OSLO? OSLO!!!!! Remember (dare I say it) app development in SharePoint?
Disclaimer: I still like it as an intranet platform, a collaboration (power user) platform, and like it better than the more expensive/more enterprisey alternatives. Sorry guys.

### And let’s focus on viability of the platform, not the viability of the tools

EDIT 2011-09-16: ninja edited this section to make complete sentences and generally wash away up-too-late-at-night-brain flavor.

And allow me to pre-emptively eliminate one common argument, since I’ve seen it crop up a lot in Windows Phone-land. Okay. The Windows Phone has, by almost all accounts, a relatively good development platform. By mobile platform standards, it’s good. It’s probably* the easiest way to build simple apps for a phone. With that out of the way, who’s buying Windows Phone apps from the app store today? And who’s paying you to develop a Windows Phone app? The vague, roughly accurate answer is no one. So let’s not go and try to frame the entire discussion as a developer tool comparison. Tools matter, but a viable platform doesn’t necessarily have to have the best tools, and more importantly, good tools don’t guarantee a viable platform. A perfect case-in-point is WebOS. I’ve heard good things about WebOS development. WebOS, for those of you not paying attention, is the platform that is now completely, 100% dead and represents a heavy loss of learning investment. So to say this plainly, even if the tooling story is good, WinRT may already be circling the drain.

### If you’re going to jump into Windows RT “whole hog”, the time is now

Let me try and focus this long, rambling answer into a focused discussion of cost (learning investment) versus reward. If you learn WinRT now and it indeed turns out to be the future, you can end up like Josh Smith did with his WPF knowledge. Oops, wrong link, Josh Smith and WPF. Sorry about the confusion, I thought he had refocused on iPhone development there for a second. Must have been someone else.
Anyway, if you bet heavily on a platform, you’ll end up an expert, and hopefully that kind of early and deep expertise translates into more tangible rewards somehow. As an additional bonus, outside of developing expertise for its own sake or for the sake of raising the value of your time to employers, there may or may not be an early gold rush for WinRT tablet apps. You heard it here first: The WinRT app gold rush. Now. If you wait, you are potentially missing out on your chance to make $2000 a month writing games for cats. That wasn’t in my original “gold rush” linkblog post, but I think it’s important enough to note that people are spending $2000 a month buying iPad apps for…cats. For cats! FOR CATS.

### Time to wrap this up

I still don’t have the answer, but I feel better. If you tl;dr skimmed my entire post, let me summarize it as follows:

• BUILD announced the Windows tablet developer framework called WinRT. There is a whiff of a hint (though I may be way off, someone confirm this) that WinRT may eventually be the development platform for Windows Phone. Unconfirmed.
• I am deciding whether to go above and beyond and try and really get into this whole Windows tablet thing. At this time I don’t know.
• The tablet has a lot of nice features and from all appearances, looks like it will be a success.
• But so did WPF back in 2005-ish.
• If I’m going to get into really learning tablet development as a sort of expertise, I should do it now, as there are both “gold rush” benefits and “deep expert” benefits.
• But if it dies altogether, I will have essentially wasted any effort learning it.
• Let’s talk about how this can end up:
• Worst case: WinRT limps along for a few years and I am never able to a) use it on a work project, b) create a successful app with it. Hundreds (or maybe thousands) of hours are wasted learning WinRT minutia.
• Best case: I elevate myself above commodity .NET developer. There is an almost unlimited best case.
Bill Gates shows up on my doorstep to personally deliver a bag of money (though it’s certainly not all about money).
• More reasonable best case: I have a lot of fun building tablet apps, get paid, and only enhance my .NET/Microsoft-guy skillset in the process.
• Worst case 2: I blog about “The Decision” deciding to go “whole hog”, then get lazy and do nothing. See you next year. Currently at laziness DEFCON 4. Or laziness threat level orange. This means you’re going to have to go through the full body scanner to detect hidden laziness about your person whenever you’re at the airport now. I’m already on “the list” for known potential threats of laziness.

And let me be clear, I’m not choosing whether to read a blog post here and there, maybe watch a screencast, buy a book and not read it (most of my tech book reviews are as follows: Minty smell! Excellent binding. Looks and feels heavy.) I’m not deciding whether to dip my toe in to test the water, I’m choosing whether to jetpack cannonball jump off a cliff/dedicate most of my available “non-work dev time” to this. So it’s one of those “The Decision” moments, albeit no one cares about my decision.

You get the idea. And it’s only been a few days since the announcement. I don’t have to make the decision today. I can let the marketing funk that is BUILD (that has permeated every nook and cranny of the .NET community) wash out of my stinky, marketing-funk-permeated clothes. Maybe give them a double-wash, hang ‘em up and let ‘em flap in the breeze for a while. But maybe, I’ll discover a faint discoloration on the sleeve. Maybe I’ll discover that after the marketing funk has washed away, a metaphorical grape juice stain of opportunity remains.

As a final note, I will only say you can be thankful I wrote this so you can enjoy twitter again. I apologize for the last few days of nonstop #Win8 tweets, and you’re welcome.
Categories: .NET
Technorati:
Friday, September 16, 2011 3:44:59 AM UTC       |  Comments [1]  |  Trackback

Friday, August 19, 2011 5:32:02 AM UTC

Having just lost my previous Windows 7 install to what I hope is a freak accident that will never recur, and subsequently having reinstalled Windows 7 from scratch, the list of customizations and programs I install on Windows 7 is a particularly fresh memory. This is a .NET developer-oriented build, and some of the things I do may not make sense to you. Hopefully one of these tidbits proves useful to you.

### Windows Customizations

1. Set your keyboard repeat rate to the fastest setting. You’re not your arthritic grandmother, and you can handle the extra speed. I wrote six full paragraphs about this subject in 2007, so if you’re curious as to why you’d make this change, well, I explain keyboard repeat rates in as much detail as anyone else ever has or ever will. I even introduce a keyboard repeat-rate mascot!
2. Make the same changes to Windows Explorer you’ve made a thousand times before, and will make a thousand times again:
3. I’m a little crazy, so I have created local accounts for my ASP.NET app pool and SQL Server service account. I know, it’s a little unhealthy.
   1. Get to Computer Management and from there, create your service accounts.
   2. Now that you’ve created these accounts, they unfortunately show up on your Windows login screen. Clutter! To hide these service accounts from the login screen, follow these instructions. No, I am not bothering to put together a PowerShell script to hide them—tag, you’re it.
4. Now for the dumb optional parts I do:
   1. Change Windows to the puke green I’ve demonstrated above, or if you don’t like my (delightful!) shade of puke green, feel free to choose your own shade of puke green. Your shade of puke green is clearly superior, I admit. To do this, hit the Windows key to bring up the Start Menu and type “glass” into the search bar.
   2. Change the Windows login screen.
Choose something like this little piece of awesomeness for your login screen. Let the haters hate (and trust me, they will hate, often).
   3. For a little extra class, change your Windows login picture to be your avatar. Do it especially if your avatar is as awe-inspiring as mine. It won’t be, but you can try your best (and fail).

### Windows Features to install

To bring up the “Windows Features” dialog, hit the Windows key to bring up the Start Menu and type “windows features” into the search bar.

• Pretty much everything resembling the letters “I”, “I”, “S”. Everything IIS, just install it. Don’t install FTP. Note that even if you don’t want to install the server, all the management tools and PowerShell cmdlets are installed here too.
• Telnet client – Telnet is admittedly horribly insecure, and you should use something more secure. But I need this telnet client every blue moon to test raw TCP connections to SMTP servers or SQL servers. And yes, I know, there’s PuTTY.

### Programs to install

Some of these will pass without explanation. E.g. it’s Firefox; you use it for browsing; no further explanation should be needed.

1. Mozilla Firefox
2. Google Chrome Beta – along with being an excellent browser, Google Chrome is also now my favorite PDF reader. That’s right: no more Adobe Acrobat, no more FoxIt, no more whatever PDF reader we all moved to when FoxIt turned into Acrobat. Just associate PDFs with Google Chrome. Now, the problem with associating PDFs with Chrome is that you can’t find that pesky Chrome install!
   1. To find the Chrome .exe file, the key is to understand that Chrome installs itself in your user profile, not in the traditional “Program Files” location. Without further ado, paste this into your Explorer address bar when prompted to browse for an EXE to associate with PDFs: %LOCALAPPDATA%\Google\Chrome\Application
3. Sysinternals Suite – I follow Mr. Rogers’ advice and make-pretend there’s an installer for this, and manually copy it into my C:\Program Files\ folder. I don’t know what most of these tools do, but Process Explorer (procexp.exe) is a totally tubular Task Manager replacement. Use it as such. I keep Process Explorer running at all times in my system tray and it lets me know when my computer is slow. That sounds trite, but it’s true. It helps to know that I’m not going crazy and my computer is in fact slow.
4. Git for Windows – Word got out early that git doesn’t work on Windows. As of 2011-08-18, that’s a lying lie from a liar, who lies, from whom lies spew forth. Lies. Git works great on Windows now, and has a painless installer. Download as instructed below:
5. Paint.NET – honestly, Windows 7’s Paint has improved considerably, even to the point where maybe you don’t need to install Paint.NET anymore. But I’m now a master of Paint.NET and must have it! With it I’ve created the screenshot masterpieces you see above, among other masterpieces such as this timeless masterpiece, which is a master work of mastery and a masterpiece. Masterpiece.
6. Pidgin for IM, assuming you aren’t labeled a corporate security VIOLATOR by running CATEGORY:UNAPPROVED SOFTWARE – this is the only unobtrusive IM client left. If you (like me) can’t help but look at the ads in all 3 places in MSN Messenger, and don’t like Digsby, well, I guess you’ll like Pidgin. Warning: if there’s a problem with your IM connection or with adding friends, blame Pidgin. I’ve had problems. Even with the need for random reinstalls and short jaunts back to MSN Messenger to add friends, it’s still worth it to me to use Pidgin for everyday use.
7. Nothing says “Windows developer” quite like an Ubuntu VM running inside VirtualBox. I will take this opportunity to point out VirtualBox is free for non-commercial use. So far, so good. I want to emphasize that my Ubuntu VM cold boots in 5 seconds or so, and saves or restores a running VM also in about 5 seconds.
It’s really, really, really fast, and runs comfortably with 2GB of RAM allocated to it. Disclaimer: I’m running on an SSD and it’s fast. Envy me.
   1. Once you get the VM installed, you must install the VirtualBox utilities, which notably install the flexible virtual display driver that lets you resize your Ubuntu window anytime. Without them, you’ll have a horrible experience and run in a tiny porthole.
   2. Note that anytime you update your Ubuntu install, you will have to reinstall the VirtualBox utilities to again get minimally bearable display drivers. I am not sure I care why.
8. Skype + headset: If you haven’t been paying attention to Skype recently, it’s both getting bloated and awesome. I’ll just focus on the awesome part today: with Skype, you can make a landline-quality voice call over the internet, plus screen sharing, for free. In case you didn’t get that, I said

Categories: .NET | Awesomeness
Technorati:  |
Friday, August 19, 2011 5:32:02 AM UTC       |  Comments [2]  |  Trackback

Thursday, August 18, 2011 2:05:10 AM UTC

### Fixing the “The customHostSpecified attribute is not supported for Windows Forms applications.” error

This one’s for the search engines. Sorry folks, none of my recent posts are readable by humans. Too bad. Quick summary of what I did to fix the problem:

1. Changed our MSBuild file’s ToolsVersion property to 4.0. This changes the behavior of the GetFrameworkSdkPath operation, which tells us where to find the Windows SDK folder (which hosts mage.exe, which performs secret ClickOnce magic). Previously (before changing the ToolsVersion to 4.0) it pointed to the v6 SDK; now it points to the v7 SDK. Quick note to help you understand #2 below: we store this path in a variable called SdkPath.
2. Changed the MSBuild variable containing the path to mage.exe to point to (note the added text): $(SdkPath)bin\NETFX 4.0 Tools\mage.exe
We no longer just point to the bin\ folder, as bin\ still contains the .NET 3.5 version of mage.exe. The .NET 4.0 version is apparently housed in the “bin\NETFX 4.0 Tools” subfolder.
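For concreteness, here’s a sketch of what those two changes look like in the build file. The target name is invented, and the SdkPath/MageExe property names just follow this post’s naming; adapt to your own script.

```xml
<!-- Sketch only: "LocateMage" and the property names are placeholders -->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="LocateMage">
    <!-- With ToolsVersion 4.0, this resolves to the v7 Windows SDK folder -->
    <GetFrameworkSdkPath>
      <Output TaskParameter="Path" PropertyName="SdkPath" />
    </GetFrameworkSdkPath>
    <PropertyGroup>
      <!-- bin\ holds the .NET 3.5 mage.exe; the 4.0 one lives in the subfolder -->
      <MageExe>$(SdkPath)bin\NETFX 4.0 Tools\mage.exe</MageExe>
    </PropertyGroup>
  </Target>
</Project>
```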

Thanks to this thread on MSDN forums for the tip. The troubleshooting exhibited in that thread is something of a comedy of errors, but eventually someone posted the correct solution, and for that I thank you.

Categories: .NET
Technorati:
Thursday, August 18, 2011 2:05:10 AM UTC       |  Comments [1]  |  Trackback
Thursday, August 11, 2011 4:59:54 AM UTC

### A warning

This post in its entirety isn’t readable by humans. I’m sorry. I started by picking out a few psake scripts here and there, figuring hey, I’ll pick one or two examples and talk about what they’re doing.

The problem with writing a blog post about build scripts is it’s pretty boring. No one idly browsing their feed reader makes it through an entire post without being knocked unconscious. Ooh, that reminds me: if you’re currently operating heavy machinery or piloting a jet plane, for your safety please stop reading this blog post. Thanks.

But. But, even though it’s well known that this kind of stuff is boring to read about, I still want to collect all the knowledge on this earth related to psake and how people are using it. And I’ve done that below (at least as of 2011-08-10).

Unfortunately for you, my dear reader, I’ve made no attempt to process my raw data collection into something readable, what with sentences, paragraphs, code samples and topical grouping. That takes way too long. I’m too lazy for that.

Instead, I’m linkblogging a clump of psake scripts and mentioning what pieces you may want to steal for your own build script.

As a bonus (and because it’s part of what I’m researching), I’ve included a bunch of links to deployment-related blog posts and deployment scripts. These things are gold, and despite their seeming tinyness and insignificance, represent hours of sweat and toil.

### Don’t Read This Blog Post – Search It

So I don’t expect anyone to, you know, read this post. But, if you’re like me, you’ll find that when it comes time to, say, add a NUnit test runner to your build script, or say, deploy to a remote IIS server, you’ll fire up your handy browser search (CTRL+F) and go looking for a script.
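If you’ve never seen psake before, here’s a minimal sketch of a default.ps1 so you know the shape of what you’re searching for. The task names, solution path, and test runner location are all invented:

```powershell
# Minimal psake build script sketch (default.ps1).
# Every path named here is a placeholder.
properties {
    $config = 'Release'
}

task default -depends Compile, Test

task Compile {
    # exec throws if the external command exits with a non-zero code
    exec { msbuild .\MySolution.sln /p:Configuration=$config }
}

task Test -depends Compile {
    exec { & .\tools\nunit\nunit-console.exe .\build\MyProject.Tests.dll }
}
```

Run it by importing the psake module and calling `Invoke-psake .\default.ps1`.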

### Well, maybe go ahead and read when I tell you to pay attention

A few places where I think a build script has done something novel, I’ll put a small note telling you to pay attention. It’s not meant to be insulting, but a way to un-zombify your brain so that you actually read that bullet point—so that it stands out from the endless sea of text and bullet points. I know, I could take the time to blog an entire post about each one of these points, and maybe I will. But for now, bet on my laziness and assume I won’t, and pay a little extra attention to how these folk put together their build scripts.

It’s like the famous quote from Passenger 57: “You ever play roulette? Always bet on Peter being lazy.” –Wesley Snipes, Passenger 57, word-for-word quote

Now that you’re mentally prepared for the hail of bullets that is to follow (bullet points, that is), have at it.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

JP Boodhoo wrote the first* non-trivial publicly-available psake script, and thus you’ll notice all the other scripts have borrowed bits and pieces from his script (particularly the ruby_style_naming_convention which_is_not_camel_case like_PowerShell_should_be):
*that I remember

• build script
• He is the only person who doesn’t rely on Solutions/Project files to compile his project, instead relying on aspnet_compiler.exe. Note, for those of you unaware, if you set the OutDir parameter for MSBuild, it will compile web application projects with surprisingly pleasant results.
• He has written his own miniature database migration tool using only PowerShell. Not bad if I do say so myself.
• He makes clever use of “dir” to lazily find all the files he needs to compile (e.g. “dir * -include *.cs -recurse”)
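That OutDir note above is worth a sketch, since it’s such a handy trick. The solution name and output path are invented; note that OutDir wants a trailing backslash:

```powershell
# Sketch: passing OutDir to MSBuild gathers all compiled output into one
# folder, including a _PublishedWebsites subfolder for web application
# projects. Solution and path are invented.
& msbuild .\MyWebApp.sln /p:Configuration=Release /p:OutDir=C:\build\
if ($LASTEXITCODE -ne 0) { throw 'Compile failed' }
```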

Ayende’s scripts:

• Rhino ESB – default.ps1 and psake-ext.ps1
• Compiles by running MSBuild on the .sln file
• Packages with the NuGet.exe command-line tool
• Zips files using the 7zip (7za.exe) command-line tool
• Runs XUnit tests via xunit.console.clr4.exe
• Generates AssemblyInfo.cs (which, if you’re unaware, is where you get your assembly version number from)
• Pulls the desired version number from Git source control using the git.exe command-line tool
• RavenDB – default.ps1 and psake-ext.ps1
• Neat way to check for installed software (prerequisites)—this checks to ensure you have .NET 4.0 installed (see the “Verify40” task)
• Runs a complex test scenario in the “TestSilverlight” task—it fires up a local Raven server in RAM, runs Silverlight-related unit tests, then kills the Raven server.
• Packages files from disparate sources—RavenDB shows how it’s done. Hint: it’s not pretty.
• Zips files using the zip.exe command-line tool (i.e., not the same tool as 7zip)
• Builds what appears to be an intense NuGet package
• Note the simple build instructions found here
• Texo (his jokingly/admittedly-NIH PowerShell Continuous Integration server)
• builder.ps1
• Sends email
• Tries to get latest on a git branch via raw git.exe commands

DotLess:

• default.ps1
• Compiles by running MSBuild on the .csproj files
• Runs ILMerge
• Builds a gem (as in, RubyGems gem)
• Builds a NuGet package

LINQToEPiServer:

• default.ps1
• Compiles by running MSBuild on the .sln file
• Starts the MSDTC service (SQL Server distributed transactions) using net start
• Does extreme funkiness with NUnit impersonating MSTest…I have no idea why.
• Modifies all config files with a simple homebrew templating engine (think string.format’s {0} {1} etc.).
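That homebrew templating idea is simpler than it sounds; here’s a sketch of the technique (file names and substitution values are invented, not taken from the script above):

```powershell
# Sketch: {0}/{1}-style config templating via String.Format.
# Template file and values are invented for illustration.
$template = (Get-Content .\web.config.template) -join "`n"

# Any literal { or } in the template would need escaping as {{ and }}
$rendered = [string]::Format($template, 'Server=devsql01;Database=MyAppDb', 'DEV')
Set-Content -Path .\web.config -Value $rendered
```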

CodeCampServer:

• psake.bat
• A pretty good psake launcher that does everything you need to run the build script, plus highlights failed builds.
• default.ps1
• Compiles by running MSBuild on the .sln file
• Includes a large number of helper functions. Pay attention to the fact that in psake, you don’t have to use tasks for everything—by all means write first-class functions that accept arguments! Arguments! They’re awesome! Use them!
• Runs Tarantino (database migration tool)
• Runs FXCop and something called “SourceMonitor”
• Runs NUnit both with and without NCover code coverage metrics
• Zips whole directories
• nant.build
• I know this has nothing to do with psake, but there’s a lot of stuff in there. A lot of the command-line call-outs can be converted to your needs.
• Deployment helper functions nicely packaged into PowerShell module files (psm1)
• Database.psm1 - Uses .NET’s SMO objects(?) to interact with SQL Server
• Creates SQL Server user (an Integrated user, not a native SQL user) on the SQL instance and on the SQL database
• Does something scary-looking that appears to export an entire database, but not the way you’re thinking—not the normal way of exporting a database.
• Package.psm1 – Uses a COM object called “shell.application” to Zip a directory
• Unlike my (and everyone else’s) implementation, this zip function makes use of object piping to receive the list of files. Nice.
• ScheduledJobs.psm1 – Uses a COM object “schedule.service” to manipulate Windows Scheduled Tasks
• Creates a new scheduled task.
• Windows.psm1 – Uses PowerShell’s WMI support to create local (not domain) users and assigns users to groups.
• Creates a local user on the machine
• Adds a user to a local group
• IIS.psm1 – uses the “WebAdministration” IIS cmdlets to manipulate IIS
• Creates an IIS website object and actually sets the bindings successfully (yessssssssss).

Aaron Weiker’s blog series

• sample psake script from his blog post
• Compiles by running MSBuild on the .sln file
• Configures app.configs with environment-specific modifications using XPath (i.e. a lot more like the NAnt/MSBuild helpers, and less hacky than doing string search & replace)
• Runs RoboCopy
• One neat thing I haven’t started doing, but desperately need to start doing, is throwing exceptions if script/function parameters are not passed in. Pay attention and see lines #1-4 of his psake script to see what I mean by this. I’ve lost hours of my life I will never get back troubleshooting PowerShell scripts over the years, only to find that I passed in a parameter called “-name” when I needed to pass in a parameter called “-fullname”. So, if you don’t do this either, start doing it.
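That parameter-validation tip can be sketched two ways in PowerShell (function and parameter names here are made up):

```powershell
# Option 1: explicit guard clauses, like the script described above.
function Deploy-Site {
    param([string]$fullname, [string]$environment)
    if (-not $fullname)    { throw 'Required parameter missing: -fullname' }
    if (-not $environment) { throw 'Required parameter missing: -environment' }
    "Deploying $fullname to $environment"
}

# Option 2: let the engine enforce it (PowerShell 2.0 and later).
function Publish-Site {
    param(
        [Parameter(Mandatory = $true)][string]$fullname,
        [Parameter(Mandatory = $true)][string]$environment
    )
    "Publishing $fullname to $environment"
}
```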

Darrel Mozingo’s blog series

• sample psake script from his blog post
• Compiles by running MSBuild on the .sln file
• Runs NCover and NCoverExplorer
• Includes helper methods that won’t make any sense to you until you actually use PowerShell and are annoyed by the same things that caused him to write those one-line helper methods. Pay attention to the little things he does in his helper methods that you probably think are fluff. Pop quiz: why did he write a create_directory helper method? I’ve experienced the pain and know the answer. If you haven’t, take my and his word for it and at least attempt to figure out why those helper methods exist.
• Four-part series on deployment with PowerShell (1, 2, 3, 4)
• Part 2:
• Modifies web.config via PowerShell’s built-in [xml] object wrapper (but only making a minor edit)
• Pre-compiles the ASP.NET site
• Writes a CPS-style (CPS-style-style? I feel better now.) function that maps a network share, yields to the caller, then unmaps when done.
• Takes a configuration backup of the live ASP.NET site
• Part 3:
• Remotely manages IIS via PowerShell remoting (starting & stopping IIS)
• Part 4:
• Rewrites the system hosts file
• Tests current DNS settings (cool!)
• Loads Internet Explorer to ping the website to force it to compile itself
• Verifies emails are being sent (so hot!)

A blog series

• psake script and run.bat (download demo.zip linked from this site if you want to see the raw psake script)
• The run.bat sample does something novel—pay attention to how it loads PowerShell as a shell (REPL environment), not as a run-and-exit script. Smooth.
• IIS administration via a mix of IIS (“WebAdministration”) cmdlets and WMI. Smooth. Creates a website and a new AppPool.

Señor Hanselman apparently wrote a whitepaper about deploying with PowerShell

• Gets latest from a SVN repository via a .NET SVN library
• Does heap big remoting work pre-PowerShell 2.0 (i.e., before PowerShell had any built-in remoting support)

Mikael Lundin (litemedia) blogged

I should mention Derick Bailey’s Albacore project for .NET – it’s a collection of Rake (Ruby) tasks that are the equivalent of a lot of what I’ve listed above. And from what I’ve seen, it has some things I haven’t covered above. Here’s a list of things it does, machine-gun-style:

• csc.exe, docu, FluentMigrator, MSBuild, MSpec, MSTest, NAnt, NChurn, NCover, NDepend, NUnit, NuSpec, Plink, SpecFlow, Sqlcmd, zip/unzip, XBuild, XUnit.
Categories: .NET | PowerShell
Technorati:  |
Thursday, August 11, 2011 4:59:54 AM UTC       |  Comments [2]  |  Trackback
Tuesday, August 09, 2011 5:40:45 PM UTC

I’m unmotivated today at work, partly because I’m switching us from MSTest to NUnit. I’ll be happy again once it’s done, but not until then.

With that in mind, I’m ready to give the second half of my “using TFS as a CI server” advice, borne out of my experience on a real team project running TFS as our CI server.

This one’s going to be less positive than my Using TFS as your CI Server part one, and if you’re not in the mood to read, I’ll just summarize:

• Don’t use MSTest as your unit testing framework, and
• If forced to use TFS 2010 as your CI server, minimize your exposure to the XAML build script, instead delegating your entire build script to PowerShell or MSBuild or whatever else tickles your fancy. Don’t use TFS 2010 Build XAML: setting up a real build written entirely in Workflow Activities is probably possible, but it isn’t worth the effort.

### Switch to NUnit: The MSTest test runner is non-deterministic and will do great harm to your CI experience

We’ve had serious problems getting consistent results out of our MSTest test runs for our two projects. Turning off various features (such as code coverage) has helped some, but not enough. It’s worth your effort to switch to NUnit if you’re serious about doing unit testing. Sorry MSTest, I tried, but the test runner fails way too often.

For nitpickers, you don’t have to switch to NUnit. You could switch to anything.

### Switch to NUnit: MSTest leaks memory and cannot support our test runs

This isn’t as important as the failing test run. It is important if it ever happens to you and you have to rework your test runs such that you don’t run out of memory any more. I’ve searched and we’re not the only people running into this problem.

### I hate Windows Workflow Foundation, and Windows Workflow Foundation hates me

Ayende ruined me with his JFHCI series of blog posts (I blogged about the topic here). After being enlightened to the fact that code (or if you prefer, script) is better in every way* than XML configuration, I’m ruined on ever using Workflow Foundation for anything. Ever.
*exaggeration

With that in mind, I’m not a fan of the reworked TFS 2010 XAML build system. However, this post is only the positive takeaways, so I shouldn’t get carried away talking about the build system, and instead talk about what you should do when told to set up your build in something called “xaml”.

### TFS 2010 Build XAML is a V1 Microsoft Product

Some of you are not going to like this, but: avoid the XAML.

Avoid the XAML build system. It takes a long time to test build scripts, it is painful, the designer is buggy, it has almost no built-in Workflow Activities (e.g. there is no “copy file” activity), it is harder to follow, harder to modify, and painful to use with multiple branches sharing the same build XAML. PowerShell’s REPL shortens the feedback loop to something like 10 seconds, and MSBuild and NAnt can be configured such that you get feedback within a few seconds as well. TFS Build’s feedback loop is something like 10+ minutes, depending on how long your entire build takes.

To be clear, the TFS Build feedback loop is as follows:

• Save, wait 10+ seconds for the save operation to complete.
• Navigate to the Source Control Window, check in the XAML file in the BuildProcessTemplates folder.
• Navigate to the Team Explorer and kick off a build manually.
• Wait until the build completes.
• Open the build summary for the build you completed.

### Takeaway: minimize your XAML exposure

My preferred method of avoiding the XAML build system is to call out to PowerShell immediately for your entire build script. I’m serious—don’t even try to build your entire build script in the XAML designer.

This blog post explains how to call PowerShell from TFS. I’m not giving you the full solution, because working with TFS build is demotivating and I don’t want to spend any more time than is necessary here, but I’ll link to a partial solution.

Here’s a rough idea of what to do:

• Find a XAML build script for your starting point, delete almost all of it, and add one InvokeProcess activity that calls out to PowerShell.
• Make sure to pass in necessary arguments like SourcesDirectory, BinariesDirectory, etc.
• Put all your compiling, test running, ClickOnce manifest building, packaging, deploying to Dev environment-ing, XML configuration modifications…put all these things in the PowerShell script.
• Investigate psake if you’re serious about doing your build in PowerShell.
• If you’re not a PowerShell fan, by all means call out to MSBuild or NAnt using the InvokeProcess activity. Whatever you do, just don’t try and wrangle with the TFS 2010 build XAML.

It’s worth the extra effort to get the call-out mechanism working, even if it seems like “this is taking longer than it should.”
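To make the shape of that call-out concrete, here’s a sketch of the PowerShell entry point the single InvokeProcess activity might invoke. The parameter names mirror the SourcesDirectory/BinariesDirectory arguments; everything else is invented:

```powershell
# build.ps1 - the one script the XAML's InvokeProcess activity calls.
# Solution and test runner paths are placeholders.
param(
    [Parameter(Mandatory = $true)][string]$SourcesDirectory,
    [Parameter(Mandatory = $true)][string]$BinariesDirectory,
    [bool]$RunTests = $true
)

# Trailing \\ avoids the classic quoting gotcha with paths ending in \
& msbuild "$SourcesDirectory\MySolution.sln" /p:Configuration=Release "/p:OutDir=$BinariesDirectory\\"
if ($LASTEXITCODE -ne 0) { throw 'Compile failed' }

if ($RunTests) {
    & "$SourcesDirectory\tools\nunit\nunit-console.exe" "$BinariesDirectory\MyProject.Tests.dll"
    if ($LASTEXITCODE -ne 0) { throw 'Tests failed' }
}
```

From here, packaging, ClickOnce manifest building, and deploy steps hang off the same script, entirely outside the XAML.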

### Use Arguments for your TFS 2010 Workflow Builds

The one thing I like about the TFS 2010 build system is the concept of workflow arguments, wherein you can change settings “at runtime,” or specifically when queueing up a build. This is particularly good for us if we want to temporarily turn off tests or run a “deploy” build from TFS with certain parameters provided only at runtime. In TeamCity there were a few free-text boxes that allowed you to type whatever arguments you wanted, but there was no guidance per se. Nothing to tell you “Our build script is looking for precisely three things: a) the NUnit tools directory (though I’ve provided a default); b) whether or not you want to deploy to the Dev environment; c) whether to run tests.” The TFS 2010 Workflow does exactly this in an extensible way. Nice.

You can set up your “call out to PowerShell/MSBuild/NAnt/whatever” activity to pass any of these runtime-provided arguments as you need them.

### My framework/platform strategy

I have a few basic strategies for using frameworks or platforms or basically anything to do with computers:

• If it works, learn to use it well. For example, Windows 7’s new features/hotkeys/start menu search, Google Reader hotkeys, C# syntax, ReSharper, the commercial ORM we’re using. I’ll generally spend the time it takes to a) learn the product, and b) use it as intended.
• If it doesn’t work, avoid it. For example, Windows Vista’s start menu search—I turned it off completely. The MSTest test runner falls in this category. I am also not a fan of most of the more advanced WPF language features, and don’t use them.

I also react very differently to frameworks I trust and those I don’t trust (i.e. those that “work” and those that “don’t work”):

• If I’m experiencing a problem with a framework I trust, I’ll read up and try to find the correct solution because I’ll assume I’m at fault. Today this means, if I see NUnit’s test runner throw an OutOfMemoryException, I’ll blame us first.
• If I’m experiencing a problem with a framework I don’t trust, I’ll write the dirtiest, quickest workaround available, because I assume the framework is at fault. I learned this lesson the hard way while working on a “quick” SP Workflow project a few years ago. Today this means, if I see MSTest’s test runner throw an OutOfMemoryException, I’ll blame MSTest and switch us to NUnit.

Something I don’t think I’m saying outright is, these labels of “it works” or “it doesn’t work” affect how I deal with everything I do with software. With TFS as a source control solution, I’m dealing with it as

1. A product that works great for SVN-style source control. Edit, merge, commit. Works great. Merge even works as of TFS 2010. Try to figure out why you’re having problems.
2. A product that does not work offline or remotely. Don’t try offline mode, period, and avoid doing heavy TFS work (e.g. moving directories of files around) remotely. Avoid or work around the problem, in other words.
3. A product that branches, painfully. If you experience problems with branching, work around the problem, potentially by losing source control history. I’m okay with losing file history. A lot of people are not okay with that. Branch less, because the pain of living with too few branches is less than the pain of TFS branching (and boy howdy do we ever need more branches).

With MSTest, I deal with it as

1. A unit test syntax and local test runner that works great (if slow). Learn how to use it properly.
2. An inconsistent CI test runner. Avoid it if possible.

With TFS Build, I deal with it as

1. A bad language/environment for writing build scripts. Avoid it/escape the XAML as soon as possible.
2. A reasonably consistent CI server that is painful to navigate. Learn to use it, and make the conscious choice to lose 5-10 minutes every day to navigating TFS menus, and to allow for confusion given the TFS tray app doesn’t work well and most of the build status UI is confusing and inconsistent. Once you’re consciously okay with losing some time navigating through the menus and closing+reloading build status windows, you stop caring about those 5-10 minutes. It works. If you can’t stop caring about everything, you’ll eventually go crazy. Right?

Did you see the pattern there? I have an internal list in my head of which features I can trust and which ones I can’t trust. This list keeps me sane.

Do others maintain their own internal “I can trust this software” list, or am I just crazy?

Categories: .NET
Technorati:
Tuesday, August 09, 2011 5:40:45 PM UTC       |  Comments [9]  |  Trackback
Wednesday, July 06, 2011 5:01:48 AM UTC

### Microsoft MVPs, all aboard!

…the above list hastily compiled off the top of my head.

### There are no .NET developers

Seriously, the amount of energy being poured into playing catch up is saddening. Imagine if all of that effort was poured into the tool that’s already better at this.

### Takeaways

1. Ruby (Rails) and other non-.NET frameworks are crossing the chasm into the mainstream.
2. Rails is a better platform. Every former .NET developer who has first tried, then written, about Ruby on Rails has reported it’s both more enjoyable and more productive. Every, single, one. EDIT 2011-07-11: ok, maybe I exaggerated. Ken has something to say as a .NET/Ruby guy who still likes .NET as much as/more than Ruby
3. I’m sensing (and feeling) Microsoft’s .NET platform is stagnating, especially recently. Aside from multiple positive reports [1, 2] on the NHibernate rewrite, I have nothing to look forward to in .NET. And while I’m here, let me be the first to say: providing a new platform for Windows development excites me in the same way that iPhone-platform development excites me—that is to say, not at all.
4. You don’t have to self-identify as a .NET developer. Instead, self-identify as a developer whose skillset is in .NET. Learn another platform (which is surprisingly easy) instead of investing extra effort in .NET. I happen to like the WPF project I work on, and my next project will probably be .NET (given my skillset), but there’s no reason I have to assume it will be .NET.

### EDIT 2011-07-14: New Takeaways

There have been many, many comments on what I’ve written. My average blog post gets 0 comments. The median for blog comments here is also 0. The 75% quartile for blog comments: also 0. The 90th percentile mark for blog comments—you guessed it—also 0! So it was something of a shock to see people are actually reading this post, and commenting or blogging responses.

And very few of them seem all that happy with my post.

Many of them assume that I am a Ruby zealot, or that this post was about “Ruby vs. .NET”, so I must have written something poorly above. I don’t know. My new takeaways (which supersede the old list) will hopefully give you a better idea of what I meant to say originally.

It’s important to note the context as well. My blog is mostly targeted at people like me, that is to say, .NET developers, and the people who forgot to unsubscribe when I stopped posting about SharePoint. The post should not categorically offend everybody, no matter what background, but from all the feedback I’m getting: it is.

On with the takeaways:

1. .NET developers (i.e., YOU) should check out Rails. If you are a .NET developer, and you haven’t checked out other frameworks like Ruby on Rails, you should do so. Instead of learning about Silverlight, for example, or whatever v1 Microsoft product comes out of BUILD, or wasting your time studying for MS certifications (seriously?), check out Rails. Rails is a viable way to develop web applications and is worth the time investment. Somewhere down the line, you may even be able to get paid to do Rails work, even in a city like Houston, even outside of the startup scene. And, it is surprisingly easy to learn other platforms.
PS—these are not strawmen alternative learning investments I’m setting up. There are real people, real .NET developers, who spend their time struggling through WCF books to take the exam, or go “all in” and study up on the newest MS framework, and never quite get caught up.
2. Drop the “.NET developer” mindset. There is a kind of assumption among .NET developers that we are .NET developers, and will use whatever the .NET framework provides to solve our problems. If we need to develop a web application, for example, we’ll consider ASP.NET WebForms or MVC, or maybe one of the alternate .NET web frameworks. Or SharePoint. We don’t look outside the walls. So, look outside the walls. .NET isn’t as fresh and shiny as it used to appear, and the alternatives are getting quite good (some would say: better, believe it or not). Again, it is surprisingly easy to learn other platforms.
Categories: .NET | Ruby
Technorati:  |
Wednesday, July 06, 2011 5:01:48 AM UTC       |  Comments [17]  |  Trackback
Tuesday, March 22, 2011 3:23:38 PM UTC

This screenshot was taken in the wild by me, at my computer a while back.

Each Cassini tray icon roughly equals one standard unit of productivity. By this completely unbiased, objective measure, I’m about seven hundred times more productive than you. Give or take.

Categories: .NET | Awesomeness
Technorati:  |
Tuesday, March 22, 2011 3:23:38 PM UTC       |  Comments [0]  |  Trackback
Friday, March 18, 2011 6:54:18 PM UTC

I’m just here today to pass along a few gems, both relating to OOP.

Categories: .NET
Technorati:
Friday, March 18, 2011 6:54:18 PM UTC       |  Comments [0]  |  Trackback
Wednesday, February 09, 2011 6:00:00 AM UTC

This post is a grab bag of information, techniques, and landmines I wish someone had told me when we first set out to run our build/deploy on top of TFS. What follows is a short, all-positive compilation of everything I’ve learned about TFS 2010. I’ll assume you work with TFS on a daily basis, and thus won’t attempt to explain TFS concepts (shelving, for example).

### All positive

All-positive means that I will not complain about TFS. I will not. I’ll only provide helpful workarounds I’ve found.

### Mini-review of TFS’s continuous integration featureset

Between TeamCity and TFS, having used both in two environments and having recently used the newest versions of both, I’d prefer TeamCity in a landslide. I could fire off a bulleted list of specific ways TeamCity is better, but in the interest of staying positive, let’s just move on.

### Tips for working with TFS source control

#### Merging

If in doubt, don’t automerge. If you are having problems with TFS merges, you can solve all your problems by manually merging every file. TFS 2005 was notoriously bad at auto-merging (i.e. performing server-side merges), so the only way to win was not to play.

In TFS 2010, I will 99% certify that automerging works. TFS 2010 has improved, and our team has had almost perfect success with automerging, though there are hiccups here and there. We’ve had merging issues, but I’m not convinced our merging issues are automerge issues. 99% is pretty good. Let me know in the comments if you can definitely blame TFS 2010 automerge for a botched merge.

Replace your client-side merge tool with one you can trust. The built-in VS2010 merge/compare tool works. However, I had “an incident.” “Incidents” are bad when merging. What happened is, the VS merge tool mistakenly matched up two entirely different methods and attempted to “merge” the changes. Merging the contents of method A() into method B() is bad. It’s bad enough to go looking for a replacement. So, following these instructions, I replaced the built-in merge tool with Perforce’s free P4Merge.

Weird merge conflicts with renamed files? Accept the lesser victory and step a) delete, then step b) re-create any files that cause weird merge issues. This breaks the file’s version history, but solves your bigger problem.

### Workspace tips

I know lots of you have problems with TFS and how it deals with files. I don’t. I don’t know if I’m just not exercising the tool enough, but I’m not having problems, now that I know what not to do. Specific advice follows:

Don’t go offline. The consequences can be worse than you think. I’ve never had success with offline mode, and what’s worse, until you go back online, Team Explorer hides your TFS server from the list. I’ve had something of a traumatic experience with offline mode, so it’s hard to stay positive, or even fake sounding positive, when describing it. Just don’t go offline unless you know how to get back online. For the record, I’m using the newest of the new with TFS 2010 and Visual Studio 2010, and even with the newest tools I’m experiencing problems. I’ll give a blanket recommendation that you don’t try it.

Do as much editing as possible inside Visual Studio Solutions or Projects. It’s easier to create, edit, move, and delete files inside Solutions or Projects (files inside Solutions are automatically tracked in TFS). Treat Solutions and Projects as a rail ride: stay in the cart, travelling slowly down the rails. Do not exit the cart. In case of fire, follow emergency procedures (spelled out further below).

This goes against advice I’ve heard. I’ve heard that TFS source control is more manageable from the command-line than via Visual Studio. But for me, I prefer to let Visual Studio’s integration automatically check out files for editing. So, whenever possible I work underneath the protective umbrella of Solutions and Projects.

Remember TFS does not automatically track file changes, not even partially like SVN or git. This means:

• Explicitly rename or move files in Source Control Explorer.
• Explicitly delete files in Source Control Explorer.
• Explicitly add files in Source Control Explorer.
• Check out files in Source Control Explorer to edit. Or, reworded:

Be sure to check out first, before editing files outside of Visual Studio. This means when running any kind of code generation, generating assembly info files, or even something as simple as editing PowerShell scripts with the ISE—in all cases, be sure to check out first. Then edit. Last, check in (or undo).

If you don’t follow these steps in order, you’ll experience bad things. Notably, if you try to check out after successfully editing a file, you’re presented with a merge conflict.
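In command form, the check-out-first ritual looks something like this (a sketch using the tf.exe client that ships with Team Explorer; the filename is a made-up example):

```bat
REM 1. Check out first ("tf checkout" is an alias for "tf edit")
tf checkout GenerateAssemblyInfo.ps1

REM 2. Only now edit the file outside Visual Studio
REM    (codegen, PowerShell ISE, whatever)

REM 3. Last, check in--or undo if it all went wrong
tf checkin GenerateAssemblyInfo.ps1
REM tf undo GenerateAssemblyInfo.ps1
```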

If something gets weird, destroy your entire folder (or entire Workspace) and get latest+force override. Don’t try to get too specific and get one or two files. Delete the whole folder, then get latest+force override. It’s quick, just do it.

There’s no good way to temporarily edit a file (e.g. temporarily change the connection string in your app.config) without triggering a checkout. If you ever need to temporarily edit a file but don’t ever want to check in the change, well…there’s really no good way to go about it. In fact there are several not-good ways to go about it:

1. Just check out the file and edit it, and simply remember to undo your changes later.
2. Cheat. Open the file via Windows Explorer, and unset its Read-Only flag. And, when you want the file to go back to its original state, simply remember to get the latest version of the file + force override.
3. Cheat. Open a prompt at the root of your workspace and run attrib -r *.* /s. This is the nuclear option, as TFS will now assume you’ve edited every single file in your workspace, and will treat any updates as merge conflicts. Don’t do this. I’ve done it so I can tell you not to try it.
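For the record, cheat #2 in command form looks roughly like this (a sketch; app.config stands in for whatever file you’re temporarily editing):

```bat
REM Clear the read-only flag behind TFS's back
attrib -r app.config

REM ...temporarily edit app.config, run against your test database...

REM When finished, restore the server version (get latest + force override)
tf get app.config /force
```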

Shelvesets work. Trust them. Use them. Use them frequently. Make as many of them as you want. Give them dumb names if they contain trash (I have shelvesets named “aaaaaaa” and “aaaaaaaaaaaa”, and of course, one named “help”). You can find them later, just sort by date. Easy.

Always shelve from Source Control Explorer to keep things simple. If you Shelve a Solution, you may miss files. I’ve missed files when I shelved a Solution. Don’t be me.

### MSTest tips (specifically: using MSTest for your unit and integration tests)

Switching from NUnit is a cinch. All the attribute names are different, but only slightly. With the exception of one MSTest feature:

Learn about localtestrun.config and how it works. We’ve started using it, and while it’s convenient, it’s essentially a non-composable* way to copy files you need for your tests.
* i.e. once you start using localtestrun.config, you can’t just switch to NUnit or XUnit without some pain. Alternately, if you had coded up manual file copying, you wouldn’t have any issues converting to or from NUnit/XUnit. Also, localtestrun.config may be responsible for our extremely slow test runs.

The test runner is excellent…and slow. It’s not entirely the test runner’s fault, as the Resharper test runner is equally slow (I’ve tried). I gave up using Resharper’s test runner when I found out it was just as slow, and has imperfect (broken) localtestrun.config support. Note, tests are slower by a large factor if you’re running code coverage.

If not for the slowness and the occasional bug and some wonky behavior during debug sessions, I’d say the VS test runner is as good as Resharper’s test runner or TestDriven.NET. Short note about TestDriven.NET: like VirtualBox, it’s not free for corporate use. Read the EULA.

IntelliTrace sounds nice, but crashes test runs. We turned it off so it stopped crashing our test runs.

Learn the keystrokes for running tests. CTRL+R, t. CTRL+R, a. Similar key chords to run tests with the debugger attached. If you forget the keystrokes, go to the Test->Run menu and they’re listed there. Just memorize them.

Ignore most of the Visual Studio testing features. They do not help you write unit or integration tests. Specifically:

• Create Unit Test (“Unit Test…” as seen in the screenshot above) in particular will only mislead you. The other tests (e.g. Coded UI Tests) are useful in other contexts, but I can’t think of any situation for which the “Create Unit Test” dialog is useful.

Start from an empty class when writing new unit tests. While the “Basic Unit Test” template works (and is an excellent tool to help you learn the MSTest attributes), a clean file is better. Apply YAGNI: you don’t need a TestInitialize or ClassInitialize method yet, so don’t add them. You can add either of them later, if and when you need them. This is what one of my new test classes looks like:
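The screenshot didn’t survive, but the shape of such a class is easy to reconstruct (a sketch; UrlHelper and its Combine method are invented stand-ins, and the naming convention is the one described in the note):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UrlHelperTests
{
    // No TestInitialize, no ClassInitialize--YAGNI. Add them when needed.

    [TestMethod]
    public void UrlHelper_When_combining_a_base_url_and_a_relative_path_it_joins_them_with_a_single_slash()
    {
        var result = UrlHelper.Combine("http://example.com/", "/sites/test");

        Assert.AreEqual("http://example.com/sites/test", result);
    }
}
```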

*note: I am using this naming convention presently. It’s not so bad. We add the class name prefix to the method name (UrlHelper_) so test results can be sorted and understood and so there aren’t hundreds of “When_etc” names in a row. And yes, I’m aware that you can add columns, specifically the test class’s full name, to the test results display, but it’s not a first-class citizen and doesn’t help when running tests in the build. Stay on the rails and just embed the class name in your test method. Side note: sometimes I need to split out my test classes to support more than one test fixture (context) per class-under-test, and I do so. Read up on test fixtures and class-per-fixture if you’re intrigued as to why I’d want such a thing.

If you have a bad test portfolio (i.e. “our tests suck”), it’s not MSTest’s fault. Using NUnit, XUnit, MSpec, or any of the (literally 20 or so) .NET BDD frameworks will not help you if you don’t have the basics. MSTest is indeed limiting in some ways, but I’m far more limited by my coding/design knowledge than MSTest itself. With not much extra effort, you can be successful with MSTest. So, now that we’re not blaming MSTest, how do we improve our bad tests?

Learn about unit testing, integration testing, acceptance testing, ATDD, BDD, design by example, context/specification, behavior testing, UI testing, “subcutaneous” UI testing, functional testing, end-to-end tests, fast/slow tests, design tests, outside-in tests, mocks, stubs, fakes, doubles, what to test, what not to test, when to delete tests, when to apply DRY to your tests and when not, how much to maintain your tests, how to organize your tests, the misrepresentation of test fixtures as TestClasses, using automocking tools, using IoC with tests, using object mothers, using test builders. Dealing with threading. Using SQLite in-memory with your ORM to speed up your integration tests. I can’t tell you how to write your tests or why today. Everyone else (“the entire internet combined”) can.

Troubleshooting: if a test just flat-out isn’t running, find it in the test list (Test->Windows->Test List Editor) and ensure it is enabled. Disabled tests just don’t run. MSTest allows you to disable tests via the test lists view, presumably because … I don’t know why. But it can be done, and it’s really weird when someone does it and I can’t run a test and I don’t know why.

I don’t know what to do with the .vsmdi file either. Check it in and try to pretend it doesn’t exist. It stores such things as the mysterious “Test Is Enabled” flag, and details for any test lists you may have, and all of these wonderful things. If you accidentally break the vsmdi file after checking it in, use the power and magic of source control and revert changes.

Related: If you need to disable a test, use the [Ignore] attribute like every other framework. Don’t argue, just do it.

Related: I haven’t found a use for test lists. I’m applying YAGNI and ignoring them until I can figure out how to use them. Don’t use test lists unless you know why you should.

### TFS as a continuous integration server

First, let me define build machine as the computer on which your TFS build agent runs. Bueno. Let’s get rolling.

Turn off code coverage? According to several blog posts (here’s one), if your build fails because “The process cannot access the file ‘data.coverage' because it is being used by another process.”, then you need to turn off code coverage.

On your build machine, restart the build agent every evening to prevent slowness caused by memory leaks. Don’t argue, just do it, particularly if you notice your builds slow down after a while. If you’re horrified by the thought of restarting services as a rule, look into the wealth of options IIS provides to restart unhealthy app pools. You’re not alone; according to Unix guys, it’s the Windows way. Give in and just run the following script as a Windows Scheduled Task every night:

REM BEGIN BATCH FILE SWEETNESS
REM =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
net stop TFSBuildServiceHost
net start TFSBuildServiceHost
REM =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
REM END BATCH FILE SWEETNESS

Run only one build agent at a time per build machine if you’re running MSTest. If you have one build machine, run one build; two machines, two builds. One per machine. Why, you ask? MSTest aborts test runs if you run two MSTest runs simultaneously. I don’t know why. If you run NUnit or skip unit tests altogether, you can run more simultaneous builds. But to avoid phantom build failures, don’t cheat; run one build at a time.

Similarly, don’t log into or log out of the build machine while MSTest is running, as it will abort any running MSTest test run. Seriously. I have a theory as to why this is so, but it doesn’t really matter why. Just know that, if you’re running tests, don’t log in or out.

TFS has a tray app called “Build Notifications”. Use it. It works for notifications, which arrive within a few minutes of build completion. One caveat: unlike TeamCity, you are not notified when a test run begins to fail, only when the test run completes.

The tray app’s build status screen cannot be trusted to be accurate, so leave the tray app alone and just use Visual Studio/Team Explorer to look at your builds. In other words, use the TFS tray app only for the alerts.

Refresh doesn’t work on the build status screen in Visual Studio. It’s buggy and doesn’t properly refresh all the time, sometimes misplacing running or completed builds. To work around this behavior and truly refresh, close and reopen the window.

### Work in progress – part 2 to come

Hello everybody! If I’m ignorant of something that would solve any of the problems I’ve experienced above (notably, speeding up test runs would be GREAT), let me know.

Assuming I get up the gumption, I’m also going to write a second post covering:

• 2 second template chooser workflow
• JFHCI, which has poisoned me against workflow foundation forever and which informs my … am I allowed to use the word philosophy when describing build systems? Let’s go with it: …informs my build system philosophy.
• Preferring a malleable (i.e., code-based, not XML or XAML) build script
• Hardcoding developer configuration the smart way in my C# project, i.e. where it’s easiest to change
• Minimizing premature configuration and thus, minimizing web.config/app.config file sizes and the nightmare that is XML transforms
• However, using WF where it works: adding build “arguments” for things that you actually do change from build to build. E.g. changing the drop folder, turning off automated deployment to a dev environment.
• Jailbreak from XAML prison:
• Calling out to MSBuild
• Calling out to PowerShell
• Calling out to custom Activities in C# (and why)
Categories: .NET
Technorati:
Wednesday, February 09, 2011 6:00:00 AM UTC       |  Comments [5]  |  Trackback
Wednesday, February 02, 2011 6:00:00 AM UTC

### Bullet point summary, for the skimmers

Your attention is already waning, so I’ll get with the bullet points:

• Runas is useful in surprising ways, including troubleshooting build breakages, security testing, and running as your service account. This is the old, boring runas.
• Runas features the /netonly switch, which makes the impossible possible on VMs and off-domain machines. I’ll save some of the thunder for later.

### Introduction

I feel sorry for everyone who is forced to do their day-to-day work on a corporate machine. It seems that in the last few years, virus scanners have dug their filthy, performance-sapping claws into your network connection, your email, and your (Internet Explorer) browser. All this added to the “scan every file before it’s accessed” behavior we’ve all come to know and love.

On behalf of corporate IT everywhere: you’re welcome.

It’s brutal out there for those of us beholden to the dreaded corporate desktop image. Oops—did I say us? I mean you. You—you’re beholden to whatever IT gives you. I’m living the high life*.
* this is a metaphor

At work we’ve run some tests (literally—we routinely run a pile of integration tests), and my old, busted laptop* is somewhere in the area of five times faster than the new hotness desktops running the corporate Windows XP image. But let’s not belabor the point.
* “What a piece of junk!” “She'll make point five past lightspeed. She may not look like much, but she's got it where it counts, kid. I've made a lot of special modifications myself.”

For those of you reading this on a corporate desktop, thanks for getting this far, but the following blog post probably won’t help you. You’re already on the domain, so you will rarely (if ever) need this trick! Enjoy running Windows XP for another five-to-ten years!

### Now that we’ve gotten rid of the losers

…let’s get on with it. Runas allows you to impersonate another user while running most any Windows app.

For server admins, this means you can run with an unprivileged account for your day-to-day tasks (like waiting patiently while Outlook runs chkdsk on your 4GB PST file) and perform your catastrophic admin mistakes (like accidentally promoting a domain controller) inside a management console or command shell running as a Domain Admin. You’ll still make catastrophic mistakes, just not catastrophic mistakes caused specifically by running a Windows account with full administrative privileges all the time. There’s a whole world of catastrophic mistakes for you to discover and experience as an admin. Moving along.

This Runas behavior is the plain, vanilla Runas, and you can get this behavior by SHIFT+RIGHTCLICKing on pretty much anything in Windows. To make things easier, you can also create shortcuts on your desktop that always prompt you to log in as someone else (AKA “run as” someone else).

For developers, this means you can run SQL Server Management Studio as your app’s service account so you can talk to your test database…your app’s test service account, I’m sure.

Also for developers, you can launch a browser window as another Windows user. This is a great trick for testing out security on web apps that use Integrated Windows Authentication.

Also also for developers, you can impersonate your build service account to run your build so that your prompt runs 100.0% precisely the way TFS/TeamCity/Whatever runs it, in order to troubleshoot any weird problem with your build. <==THIS IS A LIFESAVER

For SP admins, you can launch browser windows as your farm account or whatever admin account you have, or of course, completely unprivileged accounts to test security trimming.

Let’s see this in action:
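The screenshot is lost to time, but reconstructed from the breakdown that follows, the session went something like this (the machine name is invented):

```text
PS C:\> $env:USERNAME
P
PS C:\> runas /user:PSEALE-LAPTOP\svc-sql cmd.exe
Enter the password for PSEALE-LAPTOP\svc-sql:
Attempting to start cmd.exe as user "PSEALE-LAPTOP\svc-sql" ...

(in the newly-spawned cmd.exe window)
C:\Windows\system32>echo %USERNAME%
svc-sql
```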

Let’s break down what just happened:

1. I ran PowerShell as myself (username “P”). This is evidenced when I interrogate the USERNAME environment variable.
2. From PowerShell, I perform a Runas cmd.exe. This launches the cmd.exe shell.
3. From the impersonated cmd.exe shell, I interrogate the USERNAME environment variable. This shell is running as svc-sql. So smooth.

And yes, I give the SQL instance running on my laptop its own service account…what of it. I’m not crazy.

### But I knew all that already—what if I’m not on the domain?

I’ll bring the thunder, I promise.

First of all, a slight technicality. Wherever I say “domain” in this blog post, I mean “AD forest”. Sometimes being precise with your vocabulary isn’t helpful.

So. There are two major scenarios wherein you need (absolutely NEED) to run as a user on a different domain.

ONE: you’re running a virtual machine running in its own little virtual world on its own virtual domain, and NEED to talk to the real domain, so that you can connect to the test database server and run some queries.

TWO: you’re running on a non-domain laptop running its own brand of non-corporate-imaged bliss, and NEED to talk to the real domain, so that you can connect to the test database server and run some queries. Maybe accidentally DROP some databases while your users are testing.

NEED. This is the face of NEED.

Also, more rarely, if you NEED to connect Microsoft Excel directly to your database server to run a query, but must authenticate with Integrated authentication as another user? And you’re running off-domain? Don’t puke: pivot tables are really, really beautiful, and I mean that sincerely. My love for pivot tables is pure as the driven snow. Anyway, I’m not here to defend my Excel+SQL abomination, I’m here to bring the thunder.

### Enter the /netonly switch

Using runas /netonly allows you to run your app locally as you (in my case, the user named “P”), while authenticating over a network with another user. It’s like a rare kind of magic, like a hornless unicorn.

Also like rare magic, I have no idea how runas /netonly works. There’s probably somebody who knows how it works (someone who has gazed into Win32, and Win32 has gazed back into them), but not me. It’s good enough for me to know that runas works…somehow.

Let me try to break down what just happened in the screenshot above (and note the differences between vanilla runas and runas /netonly):

1. I ran PowerShell as myself (username “P”). This is evidenced when I interrogate the USERNAME environment variable.
2. From PowerShell, I perform a runas /netonly cmd.exe. This launches the cmd.exe shell.
3. From the cmd.exe shell, I interrogate the USERNAME environment variable. The impersonated shell is still running as “P”. However, were I to authenticate with resources on another domain, Windows would send the credentials for “OnTheINTERNET\NobodyKnowsYoureADOG”.
1. This is the point where I should try to prove that, as far as authenticating over the network, your program behaves as if it’s the impersonated user. Unfortunately I just tried to connect to CodePlex’s TFS as my example, and the work involved connecting to CodePlex via TFS depressed me, so, I won’t be attempting this today. Just try out one of my sample scripts for yourself; it will take all of 10 seconds to verify. Side note: THANK YOU, CodePlex team, for first funding SVNBridge, THEN including direct SVN support, then providing direct Mercurial support.
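Again the screenshot is gone, but reconstructed from the breakdown above, the /netonly session went something like:

```text
PS C:\> $env:USERNAME
P
PS C:\> runas /netonly /user:OnTheINTERNET\NobodyKnowsYoureADOG cmd.exe
Enter the password for OnTheINTERNET\NobodyKnowsYoureADOG:
Attempting to start cmd.exe as user "OnTheINTERNET\NobodyKnowsYoureADOG" ...

(in the newly-spawned cmd.exe window)
C:\Windows\system32>echo %USERNAME%
P
```

Note the local USERNAME is still “P”; the impersonated credentials only come into play when you authenticate against the network.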

#### Bringing the thunder: SQL 2005 management studio

REM the following script assumes a 64-bit system,
REM and assumes you installed SQL 2005 in the default folder
REM change the parts below in RED
runas /netonly /user:REALDOMAIN\YOURDOMAINUSERNAME "C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\SqlWb.exe"

#### Bringing the thunder: SQL 2008 (eight) management studio

REM the following script assumes a 64-bit system,
REM and assumes you installed SQL 2008 in the default folder
REM change the parts below in RED
runas /netonly /user:REALDOMAIN\YOURDOMAINUSERNAME "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\ssms.exe"

#### Bringing the thunder: Excel 2007

REM the following script assumes a 64-bit system,
REM and assumes you installed Office in the default folder
REM change the parts below in RED
runas /netonly /user:REALDOMAIN\YOURDOMAINUSERNAME "C:\Program Files (x86)\Microsoft Office\Office12\EXCEL.EXE"

### Still not perfect

There are a few scenarios where this still doesn’t work:

• TFS command-line client running from inside a cmd.exe prompt. To be fair to the TFS command-line client, it goes out of its way to let you type in credentials at runtime.
• Remote debugging off-domain in Visual Studio is still a challenge. I just tried to set it up on my laptop this last week, and failed. Note remote debugging requires some firewall tweaking as well, so maybe this is a PEBKAC-type problem and not a runas /netonly problem.

### You’re welcome

The pattern is simple: give runas your name, full path to the exe, and type in your password when prompted. If you have NEED to run the command frequently, create a batch file and quickly make a SlickRun MagicWord.

Or, let’s be honest, just drag a shortcut of your newly-created batch file to the Windows 7 start menu and be done with it. Searching for items in the Start Menu is almost as good as SlickRun—a good enough experience such that program launchers aren’t necessary anymore.

### Credit where due…thanks, TWITTER! (and I guess, Ryan)

Categories: Awesomeness
Technorati:
Wednesday, February 02, 2011 6:00:00 AM UTC       |  Comments [0]  |  Trackback
Monday, January 17, 2011 4:00:00 PM UTC

Welcome to 2011. It smells terrific here!

### The problem

You may not know it, but you have a problem. You’re using the standard Windows command shell. This is a problem.

The problems are manifold and boring, so I’ll briefly summarize:

• Cutting and pasting is a problem.
• The cmd shell’s default history is 100 lines. This is a problem.
• DOS’s autocomplete featureset predates the word intellisense. It’s bad.
• DOS hates double-quotes. A lot.
• DOS also hates the less-than/greater-than characters. Try this on: runas /user:PC\windersUser /password:”I believe in using long passphrases and good security etc and so forth so I’ll throw in some special characters, like double-quotes (“) and a bunch of other random stuff: <>file1"
• I waxed a little eloquent on the point above, and could go into further boring detail, but just take my word for it. DOS doesn’t do windows, and DOS doesn’t escape special characters. Ever.

### The solution

The solution is to launch PowerShell. For the privileged few running Windows 7, it comes pre-installed. For the rest of us, minus the crazy dude still running Windows 2000 for security/paranoia reasons, PowerShell can be downloaded.

A small aside: the Start menu in Windows 7 is excellent. I don’t maintain icons on my desktop or the quick launch, don’t pin programs to the taskbar, and don’t click through the Start Menu. I just tap the Windows key and type a few letters. For PowerShell: WINDOWS, “p” “o” “w”, then ENTER. That’s it.

Ahem. Onward.

### So now I’m running PowerShell…now what?

You get:

1. Better autocomplete, especially with file and pathnames.
2. Better default settings, including an output history that stores $HUGE_NUMBER lines.
3. A shell that doesn’t hate spaces and double-quotes, and by extension, you.
4. Little neat things, like dynamic vertical and horizontal resizing, and…
5. Easy cutting and pasting. Allow me to give a full tutorial below:

### Cutting to the clipboard

### Pasting to the PowerShell host

### It’s The Little Things

Tonight I’m working through the Ruby koans. I know, who cares. But I’m here to tell you that, though there’s not all that much difference tonight between using the cmd shell and the PowerShell host, there are a few little things that add up. Here’s a little thing: just now, I made a simple, tiny improvement that combined the cls command and the “run the koans” command into one line, which made iterating through the koans that much easier. Re-running the koans is now as easy as UPARROW, ENTER:

### Footnotes

I haven’t bothered trying Console2 yet.

I know cmd.exe is technically not the DOS shell. Technically it still has all the interpreter problems DOS 3.3 had, so I’m calling it DOS. Plus, the full name for the built-in shell is probably something like Microsoft Windows Command Shell 2011 for the Microsoft Windows 7 Operating System Administration Pack R2 (KB994112). I just made that up, but if you think the name is a total exaggeration, go research why we call “VSTO” by a four-letter acronym.

### Running cmd.exe inside PowerShell (strictly for the lazy)

If you love everything I’ve said, but can’t summon the mental energy to learn remedial PowerShell, that’s okay. You can still gain some benefit from the PowerShell host running the DOS command shell! Just type “cmd<ENTER>” at the prompt, to roll into the land of LEGACY.
Categories: Awesomeness | PowerShell
Technorati:  |
Monday, January 17, 2011 4:00:00 PM UTC       |  Comments [3]  |  Trackback
Wednesday, August 04, 2010 5:00:00 AM UTC

Sometimes you keep things “in the family.” Other times, instead of crafting and sending an email to your teammates with the assumption no one read it, you blog whatever you were going to write in the email. Then, because (let’s be honest) none of your teammates read your blog, you cut-and-paste the content from your blog post and send the email anyway. They’re not reading your email either way, but at least you can now cite yourself as an authority, now that you’ve blogged about the topic. Everybody knows blogging is a big deal.

This blog post is the latter case. In case you weren’t taking careful notes from the above paragraph, by “the latter” I mean “this post was inspired by a work argument, and I promise not to sound too vindictive or passive/aggressivey while presenting my case.” Enjoy!

From Ayende (disclaimer: he wrote a book about DSLs):

Similar posts I came across recently:

• Configuration ‘come to Jesus’ – David Laribee talks about the evolution of developers away from XML configuration. In the comments, he gets to the heart of the matter (so, read the comments).
• Okay, apparently I didn’t come across anything else recently.

### A further (lazy) case for hardcoding

This is by no means an exhaustive linkblog post; I just (lazily) skimmed the surface. If you want to look at examples of people moving away from XML configuration, look at the Castle/NHibernate stack. (Windsor XML configuration and .hbm schemas are dying, being replaced with, dare I say it, “fluent interfaces”. The point isn’t the fluent part; the point is they’re code-based.) Witness the ascent of psake and rake in .NET for build scripts. Witness MEF and the scenarios it enables (we probably won’t need any of them, honestly). Witness FubuMVC and its nigh-empty web.config.
I’d prefer to discuss this with a concrete example, but, alas, I’m way too lazy. Let’s just try doing this without expending any effort: if I can summarize, XML (and by extension MSBuild and NAnt) can die in flames, and I hope it does so sooner rather than later. The love of ~~money~~ XML configuration is the root of all evil. The end.

Categories: .NET
Technorati:
Wednesday, August 04, 2010 5:00:00 AM UTC       |  Comments [0]  |  Trackback
Wednesday, August 04, 2010 4:40:55 AM UTC

Updated 2010-08-04: I’ve fixed some of my word awkwardness, and added several TODOs at the bottom of the list. For your enjoyment!

The problem with coding dojos is that no one else seems to want to run them.

I’ve long desired to participate in a coding dojo where we work through the object calisthenics rules. We’ve attempted it as a group once before, but the results were poor. So poor, in fact, that when reminded of it recently, one participant gave such a look of horror and shouted “oh no!” It was as if he’d seen a chupacabra and was looking to escape out the window. But it wasn’t a chupacabra—he’d just been through our group object calisthenics dojo and suffered a flashback. No worries Michael, you’ll remain anonymous.

Despite distasteful memories and general horribleness surrounding everything I knew about object calisthenics, I plodded onward. Slowly. And, some year or so later, here I am, blogging about it, and here you are, skimming my blog post, reading every fifth sentence or so. I don’t blame you.

I’m working through object calisthenics because I want to try out the rules, and because I’m preparing for the upcoming public Houston TechFest dojo. I plan to use examples from this codebase to explain each of the rules, so it’s important to get it right. You don’t have to agonize over every tiny detail as much as me. Agonizing over code is not a rule of object calisthenics; for this you can be thankful.
So I’ll do the best I can come October 9th. I hope I’m prepared.

### A note about object calisthenics

I wrote this project following the rules (and over-arching goals) outlined in the original object calisthenics essay [Warning: RTF; will blow your mind]. One major problem with choosing KataPotter to solve is that I solved the problem without creating many collaborating objects. The essay says to “spend 20 hours and 1000 lines writing code that conforms 100% to these rules.” KataPotter is too small. If I get in a blogging frenzy, I’ll blog in more detail about my experiences, and I’ll go into depth into each rule and how I learned something from it. But, definitely not right now.

### I’ve uploaded it to GitHub

I’ll cut to the chase: http://github.com/pseale/KataPotterObjectCalisthenics – this is the 90%-finished product. I’ll list the remaining effort below.

### Now for the remaining 90%

Obvious things I’ve missed? Let me know. I don’t know what I don’t know. These are the rumsfeldian unknown unknowns. Help me make them known unknowns, or known knowns, or known known knowns. Whatever they are, let me know.

Allow my custom collections to implement IEnumerable<T>; remove now-extraneous methods. Originally I decided IEnumerable<T> would be “cheating”, but you know what? It’s a collection. It’s not cheating. I have some dumb code in there because I didn’t allow myself to deal with the collection as a collection.

Is this method signature a violation? public Money Add(decimal amount); Notice anything? It (potentially) violates rule #3, Wrap all primitives and strings. The decimal is a primitive, and thus forbidden. I figure, though: how else am I going to add two Money objects to each other, if one of them can’t tell the other Money how much it has? That’s just dumb. Too much time already has been wasted thinking about this, and, seriously, how else are you going to add two objects together?

Break up BookCollection. It’s doing too much. BookCollection should be about adding, removing, and clearing books; it should be a first-class collection and nothing more. All non-collection behavior should be broken into another class. Perhaps several classes, especially isolating anything related to those impenetrable LINQ queries. Rule #4 says that we should have first-class collections. Rule #7 says to Keep all entities small (50 lines or less). Break it up. Update: I should have been clued in by the fact that I have no less than 4 test files for this class, split by behavior. Consider me thoroughly clued.

Write a console app that works. Right now Program.cs sits alone, forlorn. It needs to a) get a list of books to calculate, b) run the calculator, c) emit the result. It’s not hard; I’ve simply neglected it. Also, for the record, I don’t have to adhere to any rule craziness when writing the console app.

Strategy pattern abuse? Investigate. Investigate the *BookSetCostCalculator classes and figure out what the author meant by Rule #2, Don’t use the ELSE keyword. Side note: remember, his rule predated the anti-if campaign. I know that I would not allow such an abomination to live in real code I’d check in. I don’t like anything about the calculators. If there’s any way you can see to either a) expand the scope of this Strategy so that it’s used in more than one place, and thus justify its existence, or b) at least find better names, let me know. Combine BookSet with the *BookSetCostCalculators somehow? For your sake, I won’t even attempt to explain my early thoughts.

Null object abuse? Investigate ZeroMoney. Again we’re hanging with our good buddy Rule #2, Don’t use the ELSE keyword. This time, the essay encourages us to try out the null object pattern. I think I’m abusing the pattern with my ZeroMoney. I don’t think that’s what null objects are for. Again, the simplicity of the problem has snagged us, and I’ve tried to shoehorn in a null object where I could have done without.
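Since both the rule #3 question (public Money Add(decimal amount)) and the ZeroMoney null object are hard to picture without code, here is a minimal sketch of the shapes being described. This is an assumed reconstruction for illustration, not the actual code from the repo:

```csharp
using System;
using System.Globalization;

// Assumed reconstruction of the Money/ZeroMoney shapes discussed above;
// the real code lives in the KataPotter repo.
public class Money
{
    private readonly decimal _amount; // the rule #3 tension: a naked primitive hides inside

    public Money(decimal amount) { _amount = amount; }

    // The (potential) rule #3 violation: accepting a raw decimal so that
    // one Money can tell another Money how much it has.
    public virtual Money Add(decimal amount) => new Money(_amount + amount);

    // ToString() as the data escape hatch at the API boundary.
    public override string ToString() =>
        "$" + _amount.ToString("0.00", CultureInfo.InvariantCulture);
}

// The null object: stands in for "no money" so callers never branch on null,
// in service of rule #2, Don't use the ELSE keyword.
public class ZeroMoney : Money
{
    public ZeroMoney() : base(0m) { }
}
```

With ZeroMoney as a starting value, a calculator can fold costs together without a single null check or else branch, which is the pattern’s whole pitch; whether KataPotter is big enough to deserve it is exactly the question raised above.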
A second issue I have with the null object pattern is: I don’t ever return null anyway, at least not inside code that I control (both the caller and the called). As they say, what’s up with that?

Namespaces. Rule #7 is Keep all entities small. That means ten (10) classes per namespace. My KataPotter solution is small enough that it almost fits in a single namespace, but I should adhere to the spirit of the rule and add some folders/namespaces. Maybe something will emerge. Update: I still hate that .NET makes it ugly/discouraging to give a namespace and a class in that namespace the same name. Take KataPotter.Core.Book (the namespace) and its class Book. Every time I want to refer to the class Book, I have to either write Book.Book or (what I consider worse) use namespace aliases.

What’s the deal with some of those tests? I don’t know why I was so nervous at the time about .Clone() not working, but I was. So sue me. I think it had something to do with taking baby steps and trying to make .Clone() work while pretending IEnumerable<T> was forbidden. But still, delete some of those tests. And call out the “acceptance tests” for what they are. Second note: move the dumb one-line SetUp() code into each test. DRY or no, the SetUp() code is hurting readability. Third note: remove tests that test non-production code. E.g. money.Add() tests cover null values…why?

Fix “the .ToString()” cheating problem. This will take a little explanation. The problem with Rule #9 of object calisthenics (no getters, setters, or properties) is that eventually something outside your code will want to interact with something adhering to the rules of object calisthenics, without breaking rule #3! Okay. Okay, let’s do this by example. Let’s say you’re logging in somewhere. There’s a method called public LoginResult Login(Username username, Password password). Now, how do you know if the login was successful? A bool property? No! A method called GetLoginSuccess()? No! You can’t even be clever and put a method on LoginResult called WasSuccessful()—because what would you return? A bool? That would almost make sense, except Rule #3 is “Wrap all primitives and strings.” If you try to do something clever like WasSuccessful(), you’ll have to return another custom object that wraps a bool, and now you’re back facing the same problem with which you started! It’s a conundrum.

I got tired of thinking about it, and I figured, “Hey, I’m writing a console app, at least in theory. I might as well implement ToString() and use it as my dead-simple way to smuggle data out of these objects!” And I did. If you look at the tests, you’ll see that all of them compare strings. And in their defense, they work, and others have resorted to ToString() to test. If you bend your thinking a little, and pretend ToString() is called ProjectOntoViewObject() and just so happens to return a string every time, maybe that would soothe your mind a tad? Does it? It still feels like cheating.

As I’m supposed to adhere to the spirit of Rule #3 (Wrap all primitives and strings), but as I’m also supposed to be able to write code that can be observed (thus saving us from the paradox of trees falling in the forest), I’m permitted to break the rules on the edges of the API I’m building. In my case, in this KataPotter solution, this means Book, Money, BookCollection, and RemoveSetResult all have ToString() methods. These are the classes that either a) sit at an API boundary, or b) I needed to unit test badly.

There are known alternatives to the “.ToString()” problem, the most popular one for testing being to implement .Equals(). I didn’t like the idea, partially because we tried that at our group dojo with horrible, horrible results, and partially because you still can’t observe the objects in question, though you can throw them at similar objects in a supercollider at very high speeds and observe what happens. It seems like every test becomes a heavy exercise in mocking.
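The Login example above can be sketched out; consider this a hypothetical illustration of the “ToString() as ProjectOntoViewObject()” compromise, not code from the repo:

```csharp
using System;

// Hypothetical sketch of the Login example above. Username and Password wrap
// their strings (rule #3); LoginResult exposes no getter and no bool (rule #9).
public class Username
{
    private readonly string _value;
    public Username(string value) { _value = value; }
    public override string ToString() => _value;
}

public class Password
{
    private readonly string _value;
    public Password(string value) { _value = value; }
}

public class LoginResult
{
    private readonly bool _succeeded; // trapped inside: no property, no WasSuccessful()

    public LoginResult(bool succeeded) { _succeeded = succeeded; }

    // The boundary-object escape hatch: think of this as ProjectOntoViewObject()
    // that happens to return a string.
    public override string ToString() => _succeeded ? "Login succeeded" : "Login failed";
}
```

Tests then assert on the projected string (exactly the string-comparison style described above), which works, and still feels like cheating.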
I’ll stew on this one some more. I need to get rid of the cheating, particularly on internal classes where I can use mocks and expectations to figure out what’s going on. Maybe ToString() is legitimate enough on the boundary objects, and may be permitted there. Will continue to stew and advise.

Updated: What is this property doing in there?!? Property? Rule 9? CHEATER! I have no excuse. My Book objects have a property getter named “Title”, and other objects make use of the Title directly. Oops!

Updated: What is “bool IsEmpty()” doing in there?!? bool? Rule 3? CHEATER! Guilty again. BookCollection has declared a “public bool IsEmpty()”, which is wrapped by an identical method on another class, which is then used as part of a decision-making process. If I’m correct, it looks like I’m going to have to introduce yet another (abuse of the) strategy pattern to eliminate the bool returns.

Updated: What’s all this dead code doing in there? As I happily refactor away, I’m making major changes to the internal organization. There are casualties. Were I a careful C# citizen, I would use the internal keyword instead of public, and R# (and for those of you without, FxCop as well) would be able to easily determine which internal methods and classes are never used, and would highlight them for me. Too bad I’m too lazy to change everything from public to internal. Thankfully, R# also includes solution-wide analysis, which lets me know which public methods and classes are unused as well. So, this is just a reminder to myself to make sure that solution-wide analysis is turned on, so I don’t miss anything obvious.

Categories: .NET | Object Calisthenics Technorati: | Wednesday, August 04, 2010 4:40:55 AM UTC | Comments [0] | Trackback

Tuesday, August 03, 2010 5:00:00 AM UTC

The Houston TechFest is coming!
#HTF2010: Houston TechFest – October 9th, 2010, @UH

I’m particularly interested in the following sessions:

• CODING DOJO (extended session—bleeds into lunchtime) – emphasis on bleed. I have only one question: “WHO’S THE CHUMP RUNNING THIS DOJO?” I really hope the speaker comes prepared.
• The Keynote – Venkat is an excellent speaker. Assuming the projector in the main room works at all, … well, maybe even without the main room projector… from the title it sounds like this is some kind of call to arms. Sweet.
• Peer code review – an Agile process – assuming this talk is based on first-hand experience, this could be the most useful session in the entire TechFest. Code review has been, hmm, how to say, an underserved need thus far in my career, and I wouldn’t mind submitting myself to code reviews.
• Workflow systems – myths – from a Microsoft DE. This could be dynamite.
• Pair programming – part of the Claudio-fest AKA .NET 1 track. I’m not sure what’s going to happen here, but I’ll give my stamp of approval sight/description unseen.
• Two excellent, globally-applicable sessions disguised as SharePoint content:
  • Advanced object-oriented programming – I’m curious to see how this session goes. At some point, the concepts become sufficiently advanced that the best way to explain them is to show code. However it’s done, the content looks interesting.
  • Agile Adoption: curing the disease – conflicts with my session, otherwise I’m there. Incidentally, I think the lack of Agile-y coding skills (or as they’re sometimes called, “agile engineering techniques”) is a huge barrier to Agile adoption. Just that, and human nature.
• Zen coding – a more philosophical session.
• The Claudio-fest AKA .NET 1 track – I won’t be attending because I’ve seen these sessions at some time or other:
  • Design patterns – Claudio’s session here is code-heavy, in the best way. He presents each design pattern by example, writing the code as you go, so you can follow along. Highly recommended.
  • Tips and tricks to boost productivity – this session is where I first learned about SlickRun. Claudio will introduce dozens of little, helpful tools in this session—you’ll pick up something from this session.

These sessions tickle my fancy. I’m not particularly interested in the introduction-to-* sessions, Azure (or anything Cloud), Windows Phone 7, the SharePoint technical sessions, Java, or anything SQL. Basically, any technical content I can’t use within the next three months is uninteresting to me. But that’s not the point. The point isn’t that I’m uninterested in attending most of the sessions; the point is that I’ve found something (in most cases, several somethings) in every time slot I do want to attend. The Houston TechFest will have something for everybody. Even me.

### Full Disclosure

I am bound to disclose the fact that if you attend the Houston TechFest, you will have to give up the following: yet again, the Houston TechFest has chosen to tempt fate and has scheduled itself on the day of the largest UH football game of the year—Miss. St. is coming to town. Last year when Texas Tech took the field at Robertson Stadium just hours after the TechFest’s closing session, things ended badly for them. 29-28 badly!

Categories: .NET Technorati: Tuesday, August 03, 2010 5:00:00 AM UTC | Comments [0] | Trackback

Saturday, July 17, 2010 7:17:01 PM UTC

In preparation for the upcoming session at the Houston TechFest (October 9th, 2010, UH campus), I’m doing “internet research” AKA browsing around a lot. I’m collecting here a list of everything I could find on the topic. Be warned, this will be exhaustive (and thus, exhausting to read). Apologies in advance.

The original essay

• Object Calisthenics [warning: RTF document] by Jeff Bay – this also appears as a chapter in The Thoughtworks Anthology. It’s well-written, and if you’re going to bother trying out object calisthenics, please read the original essay.
The most important thing to learn is not the rules themselves, but the reasons the rules exist, and thus, what you’re supposed to take away from the entire experience.

Retrospectives from people who have attempted object calisthenics

• Object calisthenics: first thoughts by Mark Needham. Notable takeaways:
  • He was “surprised how difficult the problem was to solve using the Object Calisthenics rules.”
  • “I noticed [after trying object calisthenics] that I was always on the lookout for ways to ensure that we didn't expose any state, so it's had a slight influence on my approach already.”
  • Unit testing is hard:
    • Mark’s group implemented .Equals() and .ToHashCode() for the sole purpose of being able to unit test while adhering to the rules of object calisthenics. (It is generally frowned upon to implement production code for the sole purpose of building tests.)
    • Another group used baby-steps TDD and triangulation to build unit tests. While Mark (in the blog post) was supportive of this approach, I had less than stellar results trying this out in our coding dojo last year.
  • They didn’t finish solving the problem. For those of you surprised by this, trust me: if anyone ever finishes a problem in a coding dojo environment, it’s something of a miracle. So, with this context, you may read the sentence as “Today, no miracle occurred; we didn’t finish the problem during the coding dojo.”
  • Notes from the comments:
    • From Kris: possibly encourage people to solve a small part of the problem by breaking the rules, then slowly refactor their code to “make the rules pass” in a manner conceptually similar to TDD’s red/green/refactor.
• First Sydney Coding Dojo (NOTE: this is another perspective on the same dojo mentioned above by Mark Needham)
  • Coding dojos as a means of idea exchange: “Apart from being an amusing experience, it was quite interesting to see the different approaches that people take to solve the same problem, - the design, the way they write tests, the code style, pretty cool.”
  • Also interesting to note, the author suggested improvements that would “improve productivity.” Coding dojo productivity seems to ALWAYS be abysmal.
• Object calisthenics: by example; inspected – quotes:
  • “…his techniques included the use of the Visitor design pattern, which wasn’t the author’s first choice beforehands. Test Driven Development alone wouldn’t have led to this solution…”
  • “The first observation was that the rules follow a dramatic composition that orders them from “most obvious and immediate code improvement” to “hardest to achieve code improvement” and in the same order from “easiest to acknowledge” to “most controversial”. At the end of the list, the audience rioted most of the time. But if you reject the last few rules, you’ve silently agreed to the first ones, the ones with the greatest potential for immediate improvement.”
  • “It’s a learning catalyst for those of us that aren’t born as programming super-heros. To speak in terms Kent Beck coined: Object Calisthenics provide some handy practices that might eventually lead to a better understanding of their underlying principles. Even beginners can follow the practices and review their code on compliance. When they fully get to know the principles (like Law Of Demeter, for example), they are already halfway there.”
  • This is yet another example of “coding dojos are a safe learning environment”: “At last, Object Calisthenics, if performed as a group exercise, can be a team solder. You can rant over code together without regrets – the rules were made elsewhere. And you can discuss different solutions without feeling pointless – fulfilling the rules is the common goal for a short time.”
• Alt.Net Stockholm coding dojo – it appears that they didn’t finish the problem; no miracle occurred at this dojo either. The only other takeaway I have from this is that nobody wants to stick to the object calisthenics rules. My pet theory is that people try to avoid pain, and these rules cause a lot of thinking pain.
• Trying Coding dojo, kata and Extreme OOP – “Second - the rules are very hard to follow… Very hard. We didn’t get quite there I felt.”
• “Being Cellfish” – a Microsoft employee’s detailed experiences with object calisthenics:
  • Team Coding Dojo 5 –
    • On refactoring as a tool of emergent design: “This time we had a lot of design discussions and we had to force our selfs to just do some refactoring and see where it took us. I think it was great to see how we refactored and created new classes just to later refactor these classes to nothing and removing them. It was a great experience in how refactoring in steps reveals the design for you. We also had the full test suite save us a bunch of times from stupid bugs which is also nice.”
    • On lack of productivity: “But refactoring existing code to follow the object calisthenics rules is very hard and takes time.”
  • Object calisthenics: first contact
    • On small classes: “I also learned that classes that I felt were really small and doing only one thing actually could be split up when I had to in order to conform to the rules.
Reminds me of when people thought atoms were the smallest building blocks of the universe and then it turned out to be something smaller…”
    • “So all in all I think doing a coding Kata while applying the object calisthenics rules will improve my ability to write object oriented code”

Explanations of object calisthenics

Problem sets/source code of object calisthenics attempts

Reviews from people who have read about (have not tried) object calisthenics

• Object Calisthenics – “Jeff explains in a great way a few principles and challenges the reader to try them out in a rigorous way, just to see how it works out. This is a great way to present it, its not saying “I know the right way and you must follow the rules”, its suggesting that you should give it a chance and you might begin to see some rewards, or “Try it, you might like it”.”
• Object Calisthenics, Part 2 – the author discusses how adding small methods eliminates what people sometimes call “micro duplication”, and discusses the purpose of rule #3 (No static methods other than factory methods) in further detail.
• If this is object calisthenics, I think I’ll stay on the couch – from the comments: “…but if [object calisthenics is] an exercise, then you need to make sure that it’s working the right muscles, and not hurting your overall form. My belief is that these exercises are not working the right muscles.” My counter-argument to the author is: dude, you come from Smalltalk land. You have mastered the mama bear (just-right!) approach to object-oriented programming. Object calisthenics was written by a Java programmer, for the (presumably) Java audience. Think of object calisthenics as the papa bear object-oriented ruleset (too hot!) to counteract the standard baby bear procedural-style programming practice (too cold!). Once the baby bear programmers have tried the papa bear’s porridge, they’ll…well…I sure hope they learn something. Anyway, this article has good points.
• OO’s short classes and small methods – while the author endorses object calisthenics, I’m hesitant to quote him on anything, as he hasn’t tried them out. In any event, this article was linked from proggit and received lots of comments: a mix of those expressing dubiousness, those defending the “just try it” approach, and those completely misrepresenting the object calisthenics rules. The reddit comment thread is similar. The takeaway for me is, first, emphasize that the rules make sense, and second, have a paper reference explaining the rules in further detail. There will be misunderstanding, guaranteed.

JACKPOT! Blog post citing research from SCIENCE! SCIENCE, whereupon we can base our opinions, as opposed to basing our opinions on other uninformed blog posts! ggggggggggggggggggggggg-yes!

• Are short methods actually worse? – the author reviews the most commonly cited research on method length (make sure to read the update for the updated conclusion). The author also (separately, not influenced by the aforementioned research) introduces a concept I can agree with: “By making your methods shorter, you’re just trading one kind of complexity for another.” This, I think, is the #1 issue keeping people from adopting object-oriented programming and the “explosion of objects”—they can no longer find their code once it’s split between five objects, instead of the one object that did EVERYTHING.

Related links

• Ravioli Code (from the original gangster C2 wiki) – spaghetti is what happens when you have a procedural mess. Ravioli is what happens when you have an object-oriented mess. In defense of XP, (next link follows)…
• XP Practices diagram, from What is XP – “Simple Design” is a core element of XP. “[choosing the appropriate] Metaphor” is also important to keep your code simple.
Not mentioned in the XP diagram, but implied, is the concept of…

• You Ain’t Gonna Need It (YAGNI) (from the original gangster C2 wiki) – don’t add anything to your code for “flexibility”, “modularity”, “just in case,” or “something I will need later.” YOU, your SOURCE CODE REPOSITORY, and your PROGRAM REQUIREMENTS are the most flexible pieces. When you need something in your code later, add it later, at the moment you need it.

Categories: .NET | Object Calisthenics Technorati: | Saturday, July 17, 2010 7:17:01 PM UTC | Comments [0] | Trackback

Monday, June 14, 2010 3:47:25 PM UTC

This is a placeholder in case my session is accepted. I will post the details of the problem, object calisthenics rules, the philosophy behind coding dojos, and GitHub project details, all as needed. For the rest of you reading this, if you're interested in helping out (again, assuming the session is accepted), find me and I will gladly accept help (especially during the session!). I would also like to do a public "test run" of the object calisthenics rules before inflicting it upon the TechFest audience. Let me know if you're interested.

Categories: .NET Technorati: Monday, June 14, 2010 3:47:25 PM UTC | Comments [1] | Trackback

Monday, June 07, 2010 5:52:18 AM UTC

Recently I’ve been working with WPF on my first medium-or-large development project. Am I allowed to acknowledge that I don’t have seven thousand-plus (7000+) of these big apps already under my belt, career-wise? I guess I just did. Anyway, it’s been fascinating. All these principles that you read about, and that sound nice but aren’t causing you pain in your 400LOC web part project, become ugly quickly on a large, connected codebase. I’ve now had the time to experience the following concepts personally. Notably:

• The DRY principle, and how duplication in your code creates bugs. I think I’ve seen someone with more authority say this elsewhere, but let me pretend to be the first to say it: if you only focus on one thing to improve in your codebase, start with removing duplication. Removing duplication has been eye-opening for me. Once you remove all the duplication, then you can worry about restructuring the code, but not until you’ve removed the duplication.
• Allowing broken windows to remain broken leads to a downward spiral of code quality. This is one of the opening sections of the Pragmatic Programmer book.
• The need for unit testing to ensure specific behaviors work as designed, and to help you express intent, and to ensure that the code remains working down the line, when everyone has forgotten what the code is supposed to do. Tests also break a lot (we’re working on making them more unit-ish).
• Integration testing and how, in conjunction with unit tests, it helps ensure your system works, and allows large-scale restructurings (and allows dangerous merges using the team suite source control product from the unnamed vendor).
• The importance of continuous integration (and as much as possible, continuous deployment) as a means of chaos control on a team project. “Who broke the build” actually matters on a team. If you ask “who broke the build” on your one-man project, you get funny looks because you’re mumbling to yourself again.
• SOLID (particularly the Liskov substitution principle) and how it leads to more readable/easily-digestible/composable code.
• Object modeling and carefully selecting object responsibilities (which ties into SOLID), and how a seemingly-small change in the responsibilities of objects drastically changes the way your code works and looks. I’m still struggling with choosing the right seams between object responsibilities.
• BDD or STDD or ATDD or whatever you’re calling it – honestly we’re not doing anything like this on our project, but I’m feeling the pain nonetheless.
Part of the problem is that Agile is hard to do 100% well, and part of doing a good job is putting away the pride and not doing Agile 100% well in order to get the job done. I may sound like a luddite here, and I’m sorry, but the truth remains: if we have a skill gap between the ideal Agile team and us, and we need to deliver, we work at our current skill level. No regrets.

…Back to BDD. Another part of the problem is that BDD (and unit testing) get into this fuzzy area where it’s difficult for the unskilled practitioner (read: me) to follow a rigid set of rules that will get me to the land of BDD goodness, where our customers tell us a story of just the right size with just the right level of detail, and we take those very same words and create a behavior test, and that behavior test is written with just the right level of granularity, and it’s all effortless and the tooling is excellent and we review all the behaviors as a team, and lo and behold! The customer spots a design bug just by reading the names of our behavior tests, and we’ve saved 5000 man-hours because we caught it before it was implemented. I’m still not there. And neither is the team of which I’m a member.

• Technologies are still frustrating, but not on the same level. I’ve had to implement one (1) ugly workaround involving a WPF data grid, and while it was unsavory, at least it wasn’t a complete brick wall. It’s kind of funny how the grizzled veteran SP developer within me jumps to create an ugly workaround that gets the job done, instantly, without a moment of hesitation or regret. My lead asked me, “are you sure there’s not a way to fix the problem properly?” Because most people are trained to solve the problem the right way. Nope, I’ll just do an ugly workaround and move on.
• LightSpeed ORM works, but I have three major complaints:
  • One: we are forced to inherit from a base class. This means that we have to carry the LightSpeed DLL into every project in which our model objects are used (hint: all of them). Also, arising from LightSpeed’s ID column magic in the base class, is the problem that you can’t set an entity’s ID field for unit tests without even deeper magic. It’s an ID column; I want to set it so I can test equality, without ugly hacks or counterspells to counter the LightSpeed magic.
  • Two: (and perhaps more importantly) using LightSpeed means that we can’t use interfaces in our model. [removed 300-word attempt to summarize the problem. Take my word for it, we can’t do it easily, and we want to. Included in the 300 words I deleted is the phrase “by jove,” which I think needs to get more playtime. -ed] This is probably also why the POCO (plain old C# objects) crowd is so militant about POCO itself—this whole business about using a base class and relying on attributes to do relationship mapping gets ugly in a hurry. Composability again—it’s important.
  • Three: some of the features are buggy. It’s hard to say how much of the bugginess is our misuse (PEBKAC error) or our unique scenario, but bugs kill productivity and sap energy. I wish we had NHibernate instead of this paid product, and would not be dismayed if we moved to the Entity Framework v37 or whatever number is assigned for this year’s release.
• MVVM is much better than not having MVVM. I’ve seen code that uses the codebehind approach when an MVVM approach would have worked, and the MVVM code is SO MUCH EASIER (!!!)!!)!(!)!)!(!(!) to figure out and modify. Craziness. Small note: once I’m beyond the remedial MVVM stage, I need to venture out and see what other presentation models exist. Later. Not now.
• I’m sure we could be doing MVVM better, but it’s a small-fry issue in the grand scheme of things. It’s not the root cause of any of our pain, though I’m nervous about holding child viewmodels, and then children of the child viewmodels, and so on.
Also, don’t ask us whether we’re viewmodel-first or view-first; we’re more a federation of fiercely independent states, like the United Federation of Planets, only with views and viewmodels. We are unique and we make our own decisions, and you can make your own decisions. Vulcans are strictly ViewModel-first.

• The Prism event aggregator is just beautiful. Maybe I’m biased, because I’m comparing it to not using an event aggregator at all, and attempting to cobble together an event notification solution using property change notifications on child and (great?)-grandchild-viewmodels. I assure you the previous sentence gives me NIGHT TERRORS just thinking about it. Trust me on this. It’s like trying to follow code written by MacGyver—clever, and gets the work done, but cobbled together with baling wire, a bar of soap, and a lit cigar. I should make a law out of this: You Don’t Want To Maintain MacGyver’s Codebase. Peter’s Law #59 ©2010, All Rights Reserved, ®, TM, Patent Pending.
• Unity is a problem, because it doesn’t allow us to pass parameters into the constructor at runtime (think data) while injecting the rest of the dependencies (think services). I’m somewhat new to the fanciness of IoC tools, but I’m pretty sure on this—the other tools allow you to do what I’m asking. Unity DOES allow us to work around the problem but…without going into a long boring exposition on code via a blog post…the workaround is not ideal. It’s hard to tell at this point whether we’re misusing Unity or if Unity is limiting us. I’m going to say both…but what happens when we stop misusing the tool and are at that point restricted only by its limitations? Well, we’ll get there when we get there. We’re not there yet.
• Gathering requirements is still a problem, and it turns out business analysis skills are still valuable. The sky also remains blue.
• I’m not a fan of developing for the third-party grid we use, mostly because it reminds me of SharePoint: if you stretch the grid’s functionality in ways it was not designed to stretch, you end up in a world of hurt. And it’s not a true WPF citizen, which affects a lot of things. Grids are the root of all evil. Strike that: the LOVE OF grids is the root of all evil.
• We desperately need end-to-end testing for WPF. And by we, I mean me. I’ve heard Project White works but is slow, but I haven’t heard any mention of anyone using it. Allegedly VS2010 also has UI automation facilities. Hopefully there’s a solution for this soon, and specifically, hopefully someone else invests their time, not mine, figuring all this out.
• I finally shelled out the money for R#. For code construction, you can probably do without R# and just use the built-in Visual Studio tooling (which I did for the longest time). But on our medium-ish sized project, the navigation features alone are worth the price. I’ve also noticed that I seem to be the only person in my room who uses both a) the navigation menu (CTRL+SHIFT+G, IntelliJ bindings) and b) the refactor menu (CTRL+SHIFT+R).
• I’m also frustrated by how R#’s intellisense gets in the way of typing more often than vanilla Visual Studio’s. When attempting to close R#’s intellisense: if in doubt, hit ESC five or seven times. Then wait a few seconds, then hit it another sixteen or seventeen times, to be sure.
• Third crucial R# keystroke: SHIFT+ALT+L – works almost like CTRL+ALT+L, but better. Having just typed these words, I know how dumb the last sentence reads, and I’m sorry, but there’s no improving it. Either you’re feeling the pain of CTRL+ALT+L, and discovering SHIFT+ALT+L fills you with great joy…or you have no idea what I’m talking about.
• R# 5.0 feature – update namespaces to match folders, or update folders to match namespaces – BOTH OPERATIONS WORK IN BULK! YESSSSSSSSSSSSSssssssssssssssssssss.
Also, related: you can move entire folders at once. That scary namespace change is no longer scary.

### Takeaways

I don’t know if any of you made it all the way through the long list, but even for those of you who only got a flavor of what the updates are—for the most part, these things that have held my attention for the last several months tend toward fundamental, classic issues.

This is my first time to blog directly about work, and I’m trying to walk a fine line—I don’t want to turn this into a “winning a work argument via my blog” blog entry. You know—when you argue about something dumb at work, summarily lose the argument, then later, still fuming, blog about how you would totally make Data human instead of give Geordi back his eyesight, totally, and how any dissenting opinions are wrong and weak. Then shift backwards in your beanbag, in a sort of smug, self-satisfied way. You’ve won! Sweet victory. Oh yes, sweet sweet internet victory.

Previously I’ve been the lone ranger, able to resolve arguments with myself peaceably and without a public stir. But now I’m on a team of lone rangers…a loosely united federation of lone rangers. Or planets. The point is, there’s a bunch of us. And some of us speak fluent Klingon. I’ve got to watch myself a little more now, to make sure I’m not rehashing work arguments, or posting things that we need to “keep in the family.” Hopefully the new content (the content, not the jokes) is found useful by someone besides myself.
Categories: .NET
Technorati:
Monday, June 07, 2010 5:52:18 AM UTC       |  Comments [0]  |  Trackback
Tuesday, March 18, 2010 3:54:58 AM UTC

Hello! I'm Peter and I'm here to present another sweet, sweet linkblog post. I've done this a few times before ([1] [2]). My goal with these linkblog posts (which are becoming a habit) is to expose you to new concepts, point you to useful resources, and wow you with a dazzling laser show.
I've pulled together anything tangentially related to software development in the .NET space, salted each link with commentary, and grouped the links into sections. I'm not an authority on most of the articles to which I link.

Also, you may be noticing that this is the “Q4 2009” edition of the linkblog, and are perhaps concerned that your blog aggregator is some 3-6 months out of date, or that you’ve somehow mistakenly traveled backwards through time. Nope. I refuse to change the post title out of principle. It’s important to stick to your principles.

Community events

Online video lectures, screencasts and workshops, or: Why conferences are useless as a learning vehicle - okay, admittedly I haven't made the time to watch any of these, but I think that, if I ever WERE to make the time, this is where I'd start. I'm almost making this list as a to-do for myself—hey Peter, check these out later! Also, to be clear, I don't think conferences are useless; they’re just relatively useless…for learning things. The point here isn't to hate on conferences; instead it's to say hey! Here's all these conference feeds with hundreds of session videos. We're to the point where I can say "hundreds." This is new. We weren't able to say “hundreds of free conference videos” just a few years ago.

• Virtual ALT.NET recordings - some dynamite in-depth sessions can be found here.
• Summer of NHibernate - 14 screencasts taking you through NHibernate. This was the summer of 2008.
• Norwegian Developer's Conference videos [visit links below]. Tracks include Connected Systems, Enterprise Applications, General Development, Test Driven Development, Software Engineering, Parallel Programming:
• Øredev 2009 videos (incomplete; more are available each week) - videos for the Agile and Mobile (Mobile meaning Android+iPhone, not Windows Mobile) tracks.
• Microsoft's Professional Developer Conference (PDC) videos - there's something close to 100 high-definition-quality session videos recorded here, so even if you're uninterested in 95% of what you see, you'll still find something you DO like.
• http://www.asp.net/learn/ - quality videos targeting ASP.NET. And, specifically…
• the MVC storefront series of videos by Rob Conery - wherein he tackles abstract concepts by example via the storefront application. I think this is a great approach to learning; it mixes the abstract and the practical. Okay, I'll be honest, I just watched the one video on BDD and liked it, so I’m kind of extrapolating. But I like the idea of the approach, and am interested to see more.
• http://www.tekpub.com/ - Rob Conery has opened a predominantly for-pay screencast service. There are some free videos here, but the idea is that by charging for his videos he can invest more time editing and can hire better-quality guests. I'll state for the record that if you have the time and need to learn about a subject he covers, this is totally worth whatever pittance you have to pay. With all that said, I haven't bought any of his screencasts yet. Yes, I'm "pulling a Morton*" again. *see above
• Presentations hosted on InfoQ [right sidebar]

Learning – surprisingly similar advice from different worlds: advice from a SharePoint MVP and advice from the guy who just blogged about his slide whistle.

• Why I'm an obsessive learner - summarized by: "Obsessive learning isn't about being a super geek; it's about discipline and investing in yourself and staying focused on the areas where you want to stay 'essential.'" I'm not sure what to say about this except that discipline is important. Duh, I know, but just keep it in mind before you add yet another technical book to your ever-growing, never-diminishing book queue, or spend all your time on easy-listening, edutainment podcasts.
All's well and good, and some things make learning easier than others, but for the most part there are no shortcuts—it comes down to discipline.

• The secret sauce - I find it interesting that Mauro (link immediately above) and Dave Laribee are in violent agreement on the importance of discipline. "Learning and discipline are the two halves of continuous improvement. In short: live what you learn, act on your new knowledge and skill."

News about news aggregators - today's trend is filtering the news aggregators themselves - aggregating the aggregators, meta-aggregating, so to speak. Which makes the following a meta-aggregators list of sorts. This is me showing restraint. I'd make a joke here, but I'm not going to. Meta-aggregators list. We could have a discussion about the meta-aggregators list. But we won't.

• Hacker Hacker News - filtering out the politics/lifestyle/news/everything-that-is-not-programming from news.ycombinator.com. Unfortunately, whoever was running this stopped.
• The Left Fold - weekly digest of actual programming articles found on programming.reddit.com, with a smattering of commentary.
• coding.reddit.com - with a strict policy of programming-related discussions. This community might even survive!

Object-oriented development and composability

• The problem with OOL is not the OO - an interesting perspective that separates OOP concepts from OO language constructs. A lot of us (me) have been exposed to OOP concepts only through Java/C#/C++ and are blind to such issues. Or maybe this guy is crazy and hallucinating; I can't tell. At times I think I should shut down the news aggregators entirely and just pretend articles like this don't exist, because reading these "everything you know about X is wrong" articles contributes greatly to code paralysis.

Test-driven development, unit testing, automated testing - this category is the catch-all for posts agonizing over the nitty-gritty details of effective unit testing.
Because effective unit testing isn't a skill that spontaneously appears in your brain the first time you reference NUnit.Framework.dll in Visual Studio. Anyway. As a result of my fumbling experience, I have found the following links completely and absolutely fascinating! I'm not crazy! These things are dy-no-mite, particularly because they go over arguments I've witnessed at coding dojos or arguments I've had with myself.

• A recent conversation about MSpec practices - at some point I attempted to follow DRY in my test project, just to see what happens. MSpec supports DRY because it allows you to inherit contexts via class inheritance. But Aaron talks here about his preference for limiting the use of contexts unless absolutely necessary. In this post his advice tends to grow closer to "classicist TDD" advice. There are nuances. Also, I'll use the word "nuance" in every link to follow.

Agile/Post-Agile - note I define Post-Agile as coming to grips with the reality of failed Agile, and attempting to learn from these failures.

• How to piss off your pair - pair programming anti-patterns, collected on the original C2 wiki from collective experience. If you're a cynic, you'll read this as a list of reasons why pair programming is worthless. If you're just trying to improve, this is a good (and hilarious) way to avoid problems. Example anti-pattern: Complain before your partner does something wrong. Create elaborate theories about their failings. Never forgive, never forget.

Procedural graphics

• Escherization - code to help you tessellate anything, including portraits of M.C. Escher himself (or: Escherizing Escher, or: meta-Escherizing).

Hilarity, or links I couldn't fit into any other category

• Abject-oriented programming - have you ever heard someone attempt to struggle through answering an interview question they have no idea how to answer? Where they keep digging themselves deeper into the hole? E.g.
"Modularity: A modular program is one that is divided into separate files that share a common header comment block." Let the hilarity ensue!

• Releasing psake v1.00 and psake v2.00 - psake is the PowerShell build tool. It’s easier to learn how to do something in PowerShell than to fumble around with NAnt and NAnt Contrib and all the XML-ness. It’s not that hard to try out psake, so if you’re experiencing ANY pain with NAnt and MSBuild, go for it. Perhaps the best way to learn is to look at others' scripts:
Categories: .NET
Technorati:
Thursday, March 18, 2010 3:54:58 AM UTC       |  Comments [0]  |  Trackback
Wednesday, October 28, 2009 7:01:27 AM UTC

I think now's a good time to close out the SharePoint tag on this blog, marking the end of SharePoint 2007-focused content. I'm creating this post as a sort of table of contents for my SharePoint content. I'll attempt to group posts into themes and then editorialize. Some of my earlier posts I'll even recant! Here goes.

Things I think anyone (SharePoint community or otherwise) would find interesting or useful

• Thinking Creatively - what I consider possibly my best SharePoint-related post, because it contains transferable concepts. The idea is that we as developers must go beyond our traditional code monkey role and do some critical thinking while specifying/designing solutions to problems. This is illustrated with an excellent story told during an Agile conference session. Also, I recommend the linked Agile Toolkit podcast episode that inspired my post.
• The SharePoint-Python connection - if anyone ever tells you "SharePoint was written in Python", I apologize. Anyway, misquoting aside, this is a fun little bit of history.
• Golden Rule of Troubleshooting - I misdiagnosed an error, posted the erroneous diagnosis to my blog, and to save face hurriedly changed the contents of this post to be about troubleshooting. The golden rule of troubleshooting, for those of you too lazy to click through: beware the invisible proxy!
It takes many forms! It strikes silently! Everyone will think you're crazy when you tell them network gremlins are eating your incoming packets but leaving your outgoing packets alone!

Hilarity

• Welcome to SharePoint - a real-life nightmare scenario I encountered while troubleshooting a SharePoint 2003 "desktop issue." It turns out the 15 pages of IE settings + Active Directory group policy + various Office ActiveX controls + virus scanners + IE version mix + network security appliances + Kerberos + firewalls + IE Zone settings + DNS/DHCP issues + AD replication issues + expired password issues + routing errors + spammy IE toolbars…any and all of these things, if out of whack, show the same "username/password" dialog. The post was a joke, but after troubleshooting every flavor of this problem, it gets to you a little. Anyway, welcome!
• What's wrong with this picture? - mildly amusing scenario involving disaster recovery documentation. Trust me, this is as hilarious as disaster recovery documentation is going to get.
• STSADM: Spot the typo - a lament for the en dash. This is as much hilarity as you'll find on the topic of Word AutoCorrect.

Frustration

• Angry at CAML - I remember writing this post after three days of wrestling with GetListItems, most of which was wasted learning idiosyncrasies. I then deleted most of the unhelpful angry comments, so what remains is the milder parts. This was my first "surprised by how difficult it is compared to how easy everyone makes it sound" experience. Visit for a link to the greatest Oracle DBA ever. Or visit for my graphical representation of the MSDN Rage Meter.
• I'm like, angry at numbers - in burnout mode, and ranting. If there's anything to take away from this, it's a) keep a sense of perspective (i.e. this stuff isn't important), and b) don't invest time in new Microsoft frameworks as a rule.
• Disposing SharePoint Objects: Survival Mode - this was a tipping point wherein I realized that no one does SharePoint development properly, even most of the MVPs. Keith eventually discovered and wrote down all of the disposal rules, and from there someone at Microsoft released SPDisposeCheck (which I believe covers most scenarios). Anyway, the subject of disposing is now a moot point—the more interesting bit is that, as of two full years after RTM, we had incorrect guidance on how to dispose ~2MB objects on web servers typically running a maximum of ~1000MB in the worker process. Sort of an eye-opener.
• Beyond technical challenges - rant, wherein I close the SharePoint 2007 portion of my blog (oops—the ban lasted a full month anyway, until I couldn't hold out). There are some takeaways here, notably that everyone's struggling with SharePoint, including the MVPs and "experts." I make the statement that every person working with SharePoint should look beyond their immediate technical challenge and ask: is SharePoint the right solution? Also, I challenge the assumption that SharePoint is a good developer platform.

SharePoint as an app dev platform

• [also referenced in the "useful" section above] Thinking Creatively - what I consider possibly my best SharePoint-related post, because it contains transferable concepts. The idea is that we as developers must go beyond our traditional code monkey role and do some critical thinking while specifying/designing solutions to problems. This is illustrated with an excellent story told during an Agile conference session. Also, I recommend the linked Agile Toolkit podcast episode that inspired my post.
• Argue with your customer - I think I posted this after failing to convince my customer to go with less SharePoint customization and more out-of-the-box features. I still get a lot of pushback when I try to prevent SharePoint customizations.
If there's something to take away from this, especially as a non-SharePoint developer, it's that not all features cost the same, and not all customizations cost the same. Making the relative costs clear to your customer should (should!) encourage them to avoid the more costly customizations. I'm always shocked at how someone will tell you "no, we need it to work exactly as I've told you!" and then turn around and settle for a vendor product that does about half of what they want, but costs more.

• 80%, then stop - wherein I tell a story about my experience with the Pareto principle as it applies to SharePoint solutions. Also: you don't have to write your apps in SharePoint! If it doesn't save time, and you don't know of any benefit you'll gain from using SharePoint, then why are you attempting to use it?
• Estimating SharePoint tasks: cry for help - scary realization that I'm still unable to estimate how long something's going to take, primarily because I'm constantly blazing new (new to me?) trails in the SharePoint API, making bad assumptions about its behavior, triggering bugs, and running into unexpected limitations. Any of these things can cause multi-day delays. It does get better if you're writing a second or third app that deals with the same part of the API.

Framework limitations or errors

• How many is too many [SharePoint list] items? - the SharePoint whitepaper announcing the 2000 (now 3000) item limit per container was something of a blow. To say it clearly: this limitation prevents you from using OOB lists for anything with real traffic. There are over 3000 days in 10 years, so at one item added per day, you're running into the recommended limit. Since then I've seen some crazy errors with large lists, mostly revolving around OutOfMemory errors, crawl errors, using PRIME-derived features on the lists, exporting to Excel, breaking the Grid view, and so on. So the list size limitation is real, if not a "hard limit."
• SharePoint Workflow Nuttiness, Volume 1 - my initial foray into SharePoint Workflow development ended in pain, where I had to scrap an entire approach because Workflow doesn't support state machines with replicator activities. Then I read Ayende's JFHCI series and it poisoned me forever against WF. I wonder now what problem Workflow attempts to solve, and why we don't just use a pure-code solution instead. Note that Ayende wrote a book on DSLs (Workflow is a form of a DSL), so don't just pretend he's some crank with a blog*. Final note: this is one of my top 3 most-visited posts, so apparently lots of people have run into the specific issue. *I'm aware that, by definition, I'm a crank with a blog
• Illegitimate ErrorWebParts - a crazy solution to a crazy problem—here I use the "crank the chainsaw a few times" metaphor to describe loading an SPLimitedWebPartManager. Really, this is bad.
• Dingoes stole my babies! - wherein I discuss a problem with moving wiki content via the PRIME API.
• SharePoint awesomeness: User Profiles - wherein I discuss a potential benefit of functioning User Profiles. Unfortunately this post was premature, because the scenario I envisioned/laid out in the post wasn't possible out-of-the-box. Oops! Another framework limitation.
• SharePointPdfIcon project - wherein I announce my (failed) CodePlex project. It works great for single-server farms, incidentally. I just can't be bothered to spend the time to write all the timer job junk to make it work on multi-server farms, when even this souped-up solution won't work when someone adds a new server to the farm after activating the Feature…this is one of those cases where using the SharePoint deployment framework causes more pain than deploying changes the "vanilla" way. Ugh.

PowerShell + SharePoint

• Find the DLLs - after determining that Lutz's (now Red Gate's) Reflector is a core tool for SharePoint development, the next step was acquiring the DLLs from wherever they lay.
Enter gratuitous use of PowerShell to solve the problem.
• PowerShell is Magic: Part 1 - wherein I demonstrate PowerShell calling STSADM but also calculating on the fly. PowerShell is really, really useful.
• PowerShell is Magic: Part 2 - wherein I describe (poorly) how the PowerShell REPL is powerful.
• SP + PS: Working with wikis - wherein I give a pretty weak (but, like the movies say, based on true events) example of how I use PowerShell to solve problems.
• Why PowerShell: Readability - wherein I take another shot at explaining how PowerShell launches processes (console apps)…and get the explanation wrong. I should probably post an update or something. Also, PowerShell can be made to be readable (though, like Perl, it can be made to be an abomination).
• PS + SP: Cornucopia - wherein I list all the real-world uses I've found for PowerShell working with SharePoint. PowerShell is uniquely useful for SharePoint, because SharePoint has a) an incomplete admin UI, b) a huge object model that's loaded into the GAC, c) incomplete MSDN documentation, necessitating experimentation, and d) so much XML! Probably other reasons, but those are the big ones. Also, Visual Studio post-build tasks are the devil. I'm now ashamed of the me of a year ago. Shame on you, 2008-me.
• Useful PowerShell functions I've written. I've looked at others' PowerShell functions, and I think it's a lot simpler to do away with logging, comments, object disposal, and attempts to improve performance. All things are appropriate in context, but for me, these are mostly throwaway ad-hoc scripts, and are thus simple and focused.
• Write-ListDetails - particularly, discovering (and recording useful information about) large lists—and remember, this is PowerShell, so you can pull ANY data you want, no matter how complex the criteria or where the data originates.
• Run-Query - think of this as a REPL for your SharePoint Enterprise Search SQL queries. Returns pretty objects, not a DataTable.
• Get-CrawlHealth - I used this to prototype the functionality I wanted, then built it into a _layouts page. The script works, though (with the exception of the $contentSource.CrawlCompleted property, which is inaccurate and worthless).
• Update-SearchScopes - on demand! You can't do this via the UI.
• Get-UserProfile and Get-UserProfileData - the first function retrieves the UserProfile object, the second function maps the (nigh-impenetrable) property collection to real properties. Useful for bulk data export and for examining your user profile data in a meaningful way.

Informational (knowledge, not concepts)

• Briefest introduction to GetListItems using CAML and lists.asmx - by now there are much better (and more accurate) guides to GetListItems. What may be amusing to you is the comments I leave on each line of code—wherein I document how uncertain I am of what each element does. The MSDN documentation has improved since 2007. As a small bonus, I'll note that this runs against WSSv2, not SharePoint 2007.
• Don't delete the default app pool - nitty-gritty details on IIS configuration. Note to anyone who has rolled out a SharePoint farm: congratulations, you're now qualified to roll out any ASP.NET app! Personally I'm pumped this knowledge transfers.
• Firefox supports auto-NTLM logins - most of you aren't aware that you can use Firefox and visit your SharePoint sites, and not be so aggravated by login boxes—Firefox supports automatic NTLM authentication in a manner similar to IE! Follow the directions to enable it.
• Cisco NLB setup in SharePoint - because I'm still the only resource for this in the entire world. Ridiculous.
• SharePoint search - faster than you think - wherein I complain about how slow IE is and how it is to blame for many of SharePoint's "performance issues." Honestly, it's true—try loading SharePoint pages in Firefox, they're way faster. Also it helps if the page doesn't load 1MB of JavaScript and another 1MB of inline style text.
• SharePoint Timer Jobs - here I attempt to shift from unhelpful ranting, to a post designed to help others avoid pain. I'm happy to say this is one of the top 3 posts, and hopefully it's helping people. Specifically, I mention that Timer Job updates require the manual reset of each Timer service on each server, and provide a script to quickly reschedule a timer job. Small footnote: I would rewrite the PowerShell script today such that it was a single function that takes arguments instead of requiring customization of the script. Functions are self-contained, and can easily be pasted into a PowerShell window (e.g. a PS window running remotely on a server!) without accidentally executing anything.
• Project retrospective on my People Search project - raw stream of consciousness, in bullet-point form. I didn't want to spend much time prettying it up, but reading the list of limitations, recommended customizations and preferred AD setup can save you weeks (and pain!) on your People Search project.

Cruft

• My SharePoint search page - to be clear, this is a static HTML page I made with search boxes to search Google, search USENET, search the Technet forums, and search Google Reader. It's mostly broken now, and eventually I'll take it down. I used it A LOT while doing farm architecture-y type work, and used it heavily when troubleshooting in the early days. Now that I'm more development-focused, I've found I don't use it. Ever. Takeaway for everybody: Technet forums search covers more than Google does. If you're desperate enough, search both Google and the Technet forums (called MSDN Social now?).
• SharePoint search page - hottest of the hot! - wherein I add hotkey support to my (now-defunct) search page.

Op-ed (opinion pieces with almost no useful, actionable content—sorry)

• Dear MSFT, please talk to your Office division - op-ed. Sorry. Summary is: please don't obfuscate all your DLLs. Side note: InfoPath is pain.
• One Language A Year - wherein I dedicate a year to learn C#—that is, actually learn C#. I'll dig into Scala/Clojure/Haskell/Ruby/Python/Lisp/Scheme/Erlang/JavaScript/Io/Factor/OMeta/Smalltalk some other day. Also, I outright deny the claim that you should learn one language a year. It's cheap to give advice. It's not as cheap to follow advice. I have a new rule on following advice: does the person giving the advice actually do what they say? I got similarly disgruntled when "Uncle Bob" said something to the effect that you should dedicate 40 hours to work and 20 hours to learning. That's just crazy talk.
• SharePoint wikis are awesome, I swear - another of my top 3 visited SharePoint pages. I now apologize for defending SharePoint 2007 wikis. Afternote: I wish this wasn't such a popular page. Of all things, a wiki op-ed piece is one of my top pages, ugh.
• SharePoint: not unit testing - I've waffled a bit on this one. My current stance is that I'd really like to do continuous, automated functional testing (i.e. drive a browser window with code) to give me confidence my SharePoint solution actually works. True unit testing wouldn't cover enough space to give me confidence in my project, and most of my SharePoint projects are tiny, such that the "designing your API via design-by-example TDD" argument for TDD doesn't apply. Also, read this post for a short anecdotal survey on what kind of problems I run into when developing SharePoint solutions.
• Say no to makecab.exe - Here I rant against using makecab. I think I had just read yet another MSDN article that made casual use of MAKECAB.EXE and pretended like it was a good idea. Also I apparently just read the CodingHorror post on "Strong opinions, loosely held" which I now think is a terrible formula for my blog posts. At least I include a somewhat-useful PowerShell snippet that bypasses makecab, that's something.
• Surviving your first SharePoint project: Part 1 - wherein I sloppily argue that WSPBuilder is superior to STSDEV, VSeWSS, and makecab. It's true though, and somebody's got to counteract all these MSDN articles and books that pretend WSPBuilder doesn't exist…
• Does this describe you? - short, unhelpful post that quotes Niklaus Wirth and laments SharePoint's accidental complexity.

SharePoint 2010/SharePoint 14 predictions

• SharePoint 14: Everything we know - from what I heard out of SPC09, this turned out to be dead-on accurate. They kept PowerPivot silent through the NDA period. Interestingly, SPC09 was silent on "Bulldog", the MDM product Microsoft purchased. Also, I apparently missed out on the TownSquare bits, which they publicly discussed, and which evolved into the Facebook-like features.
• Preparing yourself for SharePoint 14 - I'm proud of my track record here, because I nailed pretty much everything. Written a full year ago.

"Other" category

• Yet another SharePoint VM: RIP - there was a period of time where I was Doing Something Wrong with my VMs. I now blame either/any of: a) saving state/restoring from saved state in Virtual Server and Virtual PC, b) running my external USB hard drive off of laptop battery power, c) lots of plugging and unplugging of said USB hard drive. I haven't had a problem in a long while now. Takeaway: back up your VM every so often "offsite", just in case.
• ASP.NET MVC is a MAGIC FLYING CARPET - wow, it's been two full years since the announcement! Anyway, here I mention how SharePoint development feels like alchemy sometimes, and separately, how the SharePoint developer community doesn't seem to value the things I like about ASP.NET MVC. Posting this had the side effect of sending lots of poor souls to my blog from google searching on "how to create an ASP.NET MVC app inside SharePoint."
• SUGDC Conference 2008: Recap - wherein I give a similarly-huge recap of each session I attended. Also: layoffs drive big SharePoint adoption! So, get with the layoffs!
• SharePoint + ASP.NET MVC - wherein I troll for people searching for these keywords.
Categories: SharePoint
Technorati:
Wednesday, October 28, 2009 7:01:27 AM UTC       |  Comments [0]  |  Trackback
Tuesday, October 27, 2009 3:59:44 AM UTC

This is a two-parter. The first part is to say, hey, look at this sweet hack I've discovered in the Oxite source*! The second part is to ask, hey, is this a good idea?
* the refactored Oxite source, that is

### Background services

First, let's give a little detail here—background services are long-running tasks that Oxite needs to run periodically. These are things like sending emails and sending trackbacks—necessary, certainly. But, they shouldn't be running while some chump stares at his Netscape window waiting for the site to finish sending 1000 spam trackbacks. He should be able to post to his blog, receive an immediate response indicating the post is now available, and the trackback spamming can commence later. Background services are the things you can put off, the things that don't have to finish before sending a response to your website patrons.

These background services are called by many names—I've heard cron jobs, timer jobs, background jobs, jobs, "the heartbeat," services, and tasks.  In Oxite they're called background services.

### Look at this sweet hack!

The full source is below, but I'll attempt a walkthrough of the solution here. First, to explain the problem: we must achieve the impossible—we must somehow emulate a continuously-running Windows service inside an IIS worker process. This means we must periodically trigger jobs to run, but we can't monopolize valuable worker threads. And we certainly can't delay responses to send 30000 spam trackbacks. We've got to run, but we can't run anywhere in the ASP.NET page/request lifecycle! It's a conundrum.

What the Oxite team has done to achieve the impossible is, plainly, to cheat—they use a System.Threading.Timer.

How they manage the impossible is a lot like juggling—magic juggling. Enter stage left: Oxite, the juggler. Oxite takes a background task and throws it in the air. He takes hold of the next background task (let's start calling these things bowling pins) and throws it into the air, and moves on down the line. Before anyone knows what's happened, Oxite has gathered up all the bowling pins, thrown them all into the air, and made his getaway. Unlike most jugglers, Oxite makes no attempt to catch bowling pins once thrown! And this is why it's magic.

Let's try to break this back down into code. When Start() first executes [line 28], the Timer object sets a callback without halting progress [line 43]. This is the juggler throwing a pin in the air.

The callback method is eventually invoked. A thread is spun up* and runs the designated timerCallback() function [line 56]—and, let's make this clear—timerCallback() doesn't block the original Oxite web request; it lives in a new thread. And this new thread does its first dose of work, as shown on line 68 (SPOILER ALERT: it calls Run()).  We're not interested in what Run() does exactly—for today it must remain a spooky mystery, go look it up yourself.
* precisely how the thread is spun up is in fact, real magic, or might as well be to my superstitious caveman brain

Ok. Here's where the "magic" part of magic juggling comes in. Because any dunce can throw bowling pins, and any dunce can catch them, and any dunce, with practice, can juggle. The magic here is inside the timerCallback() method, where the Timer once again sets a callback. Each time a background service awakens, it does its work and, before going back to sleep, sets up the next callback with another call to timer.Change() [line 75]. That is to say, each time the bowling pin makes as if to land, it spins back upward into the air!

So there you have it. Oxite takes a bunch of bowling pins, throws them all into the air, and leaves. As the pins drop down to the ground, the "mystical Timer callback juggling force" propels them back into the air.

And we're running background threads in the web process. Sweet.
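Stripped of Oxite's plugin plumbing, the self-rescheduling pattern looks something like this (a minimal sketch; the class and its names are mine, not Oxite's):

```csharp
using System;
using System.Threading;

// Hypothetical distillation of the pattern: a one-shot Timer that re-arms
// itself at the end of every callback, so the work keeps recurring without
// ever tying up a request thread.
public class BackgroundTaskRunner
{
    private readonly Timer timer;
    private readonly TimeSpan interval;
    private readonly Action work;

    public BackgroundTaskRunner(TimeSpan interval, Action work)
    {
        this.interval = interval;
        this.work = work;
        this.timer = new Timer(TimerCallback); // created disarmed
    }

    public void Start()
    {
        // Throw the pin in the air: arm the timer once, return immediately.
        // A period of -1ms means "fire once, then stop."
        timer.Change(interval, new TimeSpan(0, 0, 0, 0, -1));
    }

    private void TimerCallback(object state)
    {
        try { work(); }  // runs on a thread-pool thread, not a request thread
        catch { }        // swallow exceptions, as Oxite does

        // The "magic": re-arm the one-shot timer before going back to sleep,
        // so the pin spins back upward instead of landing.
        timer.Change(interval, new TimeSpan(0, 0, 0, 0, -1));
    }
}
```

Each callback borrows a thread-pool thread only for as long as the work runs; between firings, nothing is blocked.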

### Now the question is: is this a good idea?

Now you understand how background tasks work in Oxite—or can now juggle. I get confused sometimes. In any case, congratulations!

Assuming I'm not misrepresenting anything, this is how background tasks work in Oxite. So, now for the question. Is this a reasonably acceptable way to set up background tasks for a site? I've discussed it some on twitter, but is there anything particularly nasty I've missed? Will it kill the process? Will it hang all 25 threads? Or some large portion of them?

I'm curious to hear if anyone has taken this approach, and what their experiences were.

### Full source

```csharp
 1 //  --------------------------------------------------------------------------
 3 //  This source code is made available under the terms of the Microsoft Public License (Ms-PL)
 5 //  --------------------------------------------------------------------------
 6 using System;
 7 using System.Threading;
 8 using Microsoft.Practices.Unity;
 9 using Oxite.Services;
10
11 namespace Oxite.Infrastructure
12 {
13     public class BackgroundServiceExecutor
14     {
15         private Timer timer;
16         private IUnityContainer container;
17         private Guid pluginID;
18         private Type type;
19
20         public BackgroundServiceExecutor(IUnityContainer container, Guid pluginID, Type type)
21         {
22             this.timer = new Timer(timerCallback);
23             this.container = container;
24             this.pluginID = pluginID;
25             this.type = type;
26         }
27
28         public void Start()
29         {
30             IBackgroundService backgroundService = (IBackgroundService)container.Resolve(type);
31             IPlugin plugin = getPlugin();
32             TimeSpan interval = getInterval(plugin);
33
34             if (interval.TotalSeconds > 10)
35             {
36 #if DEBUG
37                 if (plugin.Enabled)
38                 {
39                     backgroundService.Run(plugin.Settings);
40                 }
41 #endif
42
43                 timer.Change(interval, new TimeSpan(0, 0, 0, 0, -1));
44             }
45         }
46
47         public void Stop()
48         {
49             lock (timer)
50             {
51                 timer.Change(Timeout.Infinite, Timeout.Infinite);
52                 timer.Dispose();
53             }
54         }
55
56         private void timerCallback(object state)
57         {
58             lock (timer)
59             {
60                 IBackgroundService backgroundService = (IBackgroundService)container.Resolve(type);
61                 IPlugin plugin = getPlugin();
62                 TimeSpan interval = getInterval(plugin);
63
64                 if (plugin.Enabled)
65                 {
66                     try
67                     {
68                         backgroundService.Run(plugin.Settings);
69                     }
70                     catch
71                     {
72                     }
73                 }
74
75                 timer.Change(interval, new TimeSpan(0, 0, 0, 0, -1));
76             }
77
78             //TODO: (erikpo) Once background services have a cancel state and timeout interval, check their state and cancel if appropriate
79         }
80
81         private IPlugin getPlugin()
82         {
83             IPluginService pluginService = container.Resolve<IPluginService>();
84             IPlugin plugin = pluginService.GetPlugin(pluginID);
85
86             return plugin;
87         }
88
89         private TimeSpan getInterval(IPlugin plugin)
90         {
91             return TimeSpan.FromTicks(long.Parse(plugin.Settings["Interval"]));
92         }
93     }
94 }
```

Categories: .NET | ASP.NET MVC
Tuesday, October 27, 2009 3:59:44 AM UTC       |  Comments [3]  |  Trackback
Tuesday, September 08, 2009 12:32:24 PM UTC

I'm not going to get into specifics; instead I'm just going to say that my project, today, is painful to deploy. And not only is deployment painful, it's error-prone. And I don't mean "error" in the hypothetical, sanitized, almost clinical sense of the word. I mean the oops, there went four hours of troubleshooting because I deployed the wrong DLL, entirely preventable kind of error.

So it hurts.

There's lots of pain in the world of software development, but it doesn't have to be this bad. All I need to do is, in the beginning, set aside some time to deploy an empty shell of a project. When I say empty shell I mean, almost literally, a Hello World type of application. If this Hello World application involves seven databases, twenty seven service accounts, a network load balancer and forty web.config files, so be it. If the deployment requires granting of security permissions to these twenty seven service accounts, so be it. Sure, it's going to seem useless, and the tangible payoff will be minimal. Painful? Error-prone? Bring it on. Bring it on at the beginning.

And please, for your own sake, one-click automate the deployment! If nothing else, automate the happy path, which is orders of magnitude easier than building a fault-tolerant deployment. Worst case, your automated deployment fails and you're back to manually deploying. In other words, if you're deploying manually, then you're already living out the worst-case scenario.

And by you, I mean me, today. Again, not hypothetical.

### Happy path/sad path by example: copying a folder

Happy path:

1. Copy a folder and contents to a destination directory.
2. You're done! Congratulations!

Sad path:

• Does the source file exist?
• Is the source file unlocked for read?
• Does the destination folder exist?
• Or its parent folder?
• Or its parent?
• Or the parent drive?
• Or parent network share?
• Or maybe you need to connect to the network share with a different service account?
• So this means you need to explicitly drop all current connections.
• Can we drop (delete) the existing destination folder?
• Is the folder locked?
• Is this because we have an Explorer window open to the folder (sooooooooooo common for me)
• Are we overwriting a file?
• Do we have permissions?
• Is the file locked?
• What if some of the file copy operations succeed, but not others?
• Do we have a perfect backout strategy?
• Can we restore the original folder in its entirety?
• If not, can we restore each individually changed item in its entirety?
• Are we running in a transaction?
• Are all our options atomic?
• Do we implement a transaction log of sorts? How do we know without a shadow of a doubt our operations succeeded?
• Did the virus scanner interfere when copying an .EXE?
• On the remote machine?
• On an invisible HTTP proxy on the network?
• Is the remote file share something crazy like WebDAV, where only some operations are supported?
• Are you sure you're running the WebClient service required to make this WebDAV/explorer integration work?
• Are the file+pathnames reaching the maximum allowable limit, and are you copying to a deeper subdirectory which would cause "too long filename" errors to occur?

IT as a career makes me paranoid—this is a ridiculous checklist for just copying a file. But I've experienced all of these things. Yes, it's ridiculous, and yes, it's real.
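To drive home how deceptively small the happy path is, here's the entire happy-path folder copy in C# (a sketch; the class and method names are mine, and every bullet above is a way it can blow up):

```csharp
using System.IO;

static class HappyPathDeploy
{
    // The happy path, and nothing but the happy path: copy a folder and its
    // contents to a destination. Every failure mode in the checklist above
    // is cheerfully ignored.
    public static void CopyFolder(string source, string destination)
    {
        Directory.CreateDirectory(destination);

        foreach (string file in Directory.GetFiles(source))
            File.Copy(file, Path.Combine(destination, Path.GetFileName(file)), overwrite: true);

        foreach (string dir in Directory.GetDirectories(source))
            CopyFolder(dir, Path.Combine(destination, Path.GetFileName(dir)));
    }
}
```

A dozen lines. The fault-tolerant version is the other ninety percent of the iceberg.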

Categories: .NET
Tuesday, September 08, 2009 12:32:24 PM UTC       |  Comments [0]  |  Trackback
Monday, August 24, 2009 10:08:21 PM UTC

The PowerShell REPL is awesome.

PowerShell is by no means the only REPL. There's the immediate window in Visual Studio, the Snippet Compiler, LINQPad, the Interactive C# shell from Mono, and a REPL environment for most every other scripting language on the planet. Some of the TDD guys refer to "exploratory tests" that they write to learn about a third-party API. On the Regex front, there are scads of web-based and Windows-based tools to help you build and test regular expressions as fast as you can hit the "Run" button. I'll even accept writing a console application as a weak form of a REPL, though I wouldn't encourage it. All these things serve the same goal: give me instantaneous feedback. For those of you already familiar with the REPL, we're good, we're in the know.

But if you're the person who never uses a REPL, allow me to show you, using an example from just 3 minutes ago, how powerful they are.

### My burning question

All this began with a burning question: what happens in string.Format() if I place the parameters out of order? What happens if I use a parameter twice?
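For the record, the answer takes about ten seconds to establish in any REPL. Here's the experiment reproduced as C# (my actual session ran the equivalent calls at the PowerShell prompt):

```csharp
using System;

class FormatExperiment
{
    static void Main()
    {
        // Out-of-order parameters: {1} may appear before {0}. Works fine.
        Console.WriteLine(string.Format("{1}, {0}!", "world", "Hello"));
        // prints: Hello, world!

        // The same parameter twice: also perfectly legal.
        Console.WriteLine(string.Format("{0} and {0} again", "once"));
        // prints: once and once again
    }
}
```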

### Conclusion stated in words

I answered a specific question about .NET's string.Format() library function in less time than it would have taken to search and peruse the results. Sandboxes like these reduce friction and let me run a series of experiments as quickly as I can think of them. Good REPLs (like PowerShell) let me a) get immediate feedback on my input commands, and b) format and parse the resulting objects into a meaningful answer. Bad feedback loops (things that aren't REPLs) require overhead just to run, deliver feedback an hour, a day, or a week later, and return meaningless answers (think huge log files). I'm just here to make sure you're all aware you have a choice: you can choose a REPL, or you can choose awfulness. Your call.

### Oh yeah, what's a REPL?

REPL is short for Read-Eval-Print Loop: an interactive prompt that Reads what you type, Evaluates it, Prints the result, and Loops back for more input. The PowerShell console is exactly that.

Categories: PowerShell
Monday, August 24, 2009 10:08:21 PM UTC       |  Comments [0]  |  Trackback
Saturday, August 22, 2009 6:55:47 AM UTC

Hello! I'm Peter and I'm here to present another sweet, sweet linkblog post. In my first link-heavy post, I pulled any links I could remember, from the previous week, year, or decade, a 'best of' of sorts. So don't expect great things from this sophomore effort.

To give the standard introduction: I've pulled anything tangentially related to software development in the .NET space into this linkpost, salted each link with commentary, and grouped them into sections. I'm not an authority on most of the articles I link to, so when commenting on them I'll try to restrain myself.

Random topics

• Reminder: Virtual ALT.NET livemeetings - it's as easy as 1) typing snipr.com/virtualaltnet into your browser, and 2) entering the LiveMeeting. If you want to participate (encouraged), get a cheap mic/headset of some kind. There are three weekly meetings: Australian Monday night, Wednesday night US, and the Central-time Brown Bag meeting on Thursday. I'm definitely a fan. Also, don't forget: VAN records their sessions. Between this, NDC, and the various Ruby conference videos I've found, I'm never looking for something to watch.
• Designing Effective Dashboards [PPTX] - while I hate buzzwords as much as the next guy, and for that matter, while I hate browsing PowerPoint presentations, this has managed to overcome both its enterprisey roots and its PPTX medium! It's a PowerPoint presentation about Business Intelligence, which means I should be falling asleep just writing about it, and yet, here I am! Trust me, it's worth browsing. For a teaser, here's a self-explanatory slide:

Business Intelligence is neat-o! Wait, did I say that?

• The circle of no life - every now and again I need reminding.
• The Bipolar Lisp Programmer - this is partially about Lisp programmers, but also partially about programmer attitudes and, lacking a better word, 'psychology.' I don't like that word. Anyway, the article isn't necessarily based in hard science, but hey, it's a fun read, and you may just recognize something of yourself in it.
• Optimism - the three axes of optimism (or pessimism, if you're one of those people)—interesting because it provides an algorithm to make you more optimistic. Which is a good thing. And yes I said algorithm, which is why I'm linking to it.

Software engineering topics

My ongoing obsession with learning, which is arguably the skill software developers need most

• Making TDD Stick: Problems and Solutions for Adopters - for those of you trying to teach others TDD, read this article for sage advice. For those of you new to TDD and frustrated by how weird and difficult TDD is, read this quote:

Test Driven Development can be very hard to learn. The learning phase (the time during which it becomes a deeply ingrained habit) typically lasts from two to four months, during which productivity is reduced [2]. Eventually the benefits will be obvious and the technique is usually self-sustaining, but the question is: how to get there? Many developers give up after only a few days.

…then go check out the article to see what tips they have for making learning easier.
• The 7 Phases of Unit Testing - step 1 is "Refuse to unit test because 'you don't have enough time.'"
• We have no class (a personal journey of not learning OO) - here's some honesty about how learning object-oriented programming is a large undertaking. I'm there with you, man, I'm there. I'll say that though I "get" the SOLID principles, building systems of lots of interacting objects is messy. Procedural refactorings (e.g. splitting a huge method into smaller methods) are almost always straightforward and can even be measured (see cyclomatic complexity), so you know when you're on the right trail. On the other hand, when you split responsibilities into multiple classes, you enter the world of, yes, I'm going to use the word, brace yourself, here it comes: hermeneutics. In words I understand: object-oriented design is messy and mushy, and it's difficult to say objectively whether a design is better or worse than competing designs. And no, I'm not going to acknowledge the pun; if you noticed, well, that's your problem.
• Under Your Fingers [5 min of QuickTime video] - here Corey Haines exhorts us to engage in deliberate practice. Deliberate practice. It's an important concept, and it's something every programmer seems to be lacking. Wax on, wax off.
• The difference between derivation and innovation - Oren gives a rule which explains why so much of the new hotness in .NET is uninteresting to me. Azure, for example, is the same thing you're already doing, but in the cloud…a derivation. See, that sounds better than "it's boring" or "I hate it," right?

My ongoing obsession with TDD, unit testing, and/or producing quality software in general

• Classifying tests - good article that disambiguates unit, integration, and functional tests. Most of you reading this have an incorrect understanding of what a unit test is, and yes I'm talking to you. I'm not going to write your name here in the post, because that would just embarrass you, but trust me, it's you. So read the article :)
• A testing survey on a large project - the title isn't flashy linkbait like "40 Reasons I Can Make You Click This Link," but bear with me: this is one of the few places I've seen unit testing, integration testing, and acceptance/functional/UI testing put into a kind of cohesive whole. Since I'm still forming my own style, this type of post is great for gaining a sense of perspective.
• Test Review Guidelines - Art Of Unit Testing - Roy Osherove has posted the test review guidelines from his book on this page. Most of the guidelines are unsurprising, excepting the one interesting point he makes about overspecifying tests. These are subtle points and as I struggle with this issue, it's good to read his concrete rules on avoiding overspecification.
• Unit Testing with functions that return random results [Stack Overflow] - This issue has ruined (sorry, produced valuable learning experiences during) no less than two full coding dojos, and has almost derailed a third. Now, every time we even catch a whiff of randomization, it's "uh-oh, let's break all the dojo rules and just work past the issue." So I find it satisfying to see that it's not just us.
• Evolutionary architecture and emergent design: Test-driven design, Part 2 - I'm only interested in the claim in the first section under "moist tests" that "DRY doesn't apply to unit tests." I'm suspicious that those following the "moist test" philosophy are classical TDDers and those who disagree are mockists. Yeah, I didn't make those terms up, see this Martin Fowler article. In fact:
• Mocks aren't stubs - from Martin Fowler. I found this looking for the difference between mocks and stubs, and ended up with an excellent bit of perspective: TDD advice from classical TDDers and TDD advice from mockist TDDers won't always agree, especially in the teensy details like "moist tests" above. Also, coding dojos.

Counter-counter culture (wherein we get to 'why' answers)

• The usual result of Poor Man’s Dependency Injection - Chad explains why in his experience using Poor Man's Dependency Injection (look it up if you're interested) always ends in tears. Up to reading this post, I assumed that allegiance to IoC tools was one of those irrational tribal values, but now (after reading the post) I understand.
• Why unit testing is a waste of time - the "waste of time" title is more inflammatory than the post, but the core point is that unit testing is part of a balanced diet. Part.
Also, to be clear: unit testing is not a waste of time.

Architectures

• The Tale of the Lazy Architect - Oren describes a composable system. This sounds sweet.
• Submit this Form InfoPath - FAQs - and here is something of the opposite. This FAQ exemplifies why I don't want to ever do anything complex with InfoPath. Not to pick on Kathy the author, that isn't the point—the point is to show what kinds of awful workarounds and hacks you have to do just to make a field read-only, or to pull in a user's email address. For fairness, I should point out that this FAQ is for InfoPath 2007.

• Culture [flash rendering of PPT presentation] - the presentation isn't very fancy but the content is dynamite. Here's an analysis of the concepts presented. I particularly like where they discuss corporate values and how the values are often meaningless ("Enron had a nice-sounding value statement"). They've got crazier (crazy in a good way) stuff in there, including a discussion of how they fire average performers. It's almost utopian-sounding.
Categories: .NET
Saturday, August 22, 2009 6:55:47 AM UTC       |  Comments [0]  |  Trackback
Wednesday, May 27, 2009 1:08:01 AM UTC

I'm here today to present the case against a particular piece of NUnit's fluent syntax. But before I do, let's set up a concrete example, something that gives the test meaning. Instead of just writing something down in boring old plain text, I've sloppily remixed a work of art I found via an image search and retitled it "Rebellion Against the (overuse of unrelated, Creative) Commons(-licensed images)!":

Solid. Requests for ~/Default.aspx should redirect to the Home controller. Let's get on with the show.

### My Gripe: Assert.That syntax

Here is a comparison of the Assert.That syntax, the traditional Assert.AreEqual syntax, and the syntax provided by MSpec's NUnit extensions (MSpec isn't the only framework with these extensions, it's just the one with which I'm familiar):
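(The comparison originally appeared as an image. Reconstructed to match the character counts in the table below, the three lines were, give or take a variable name:)

```csharp
// NUnit fluent syntax (66 characters):
Assert.That(result.RouteValues["controller"], Is.EqualTo("Home"));

// NUnit classic syntax (58 characters):
Assert.AreEqual("Home", result.RouteValues["controller"]);

// MSpec-style extension method (53 characters):
result.RouteValues["controller"].ShouldEqual("Home");
```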

I don't like the Assert.That syntax, in this scenario.

Look at all that. The new syntax is just…ugh.

I've read elsewhere that it's good because it reads like a sentence. Well, to shock you into elevated awareness, may I jog your memory of something else that reads a lot like a sentence? In case you didn't dare hover over that link, allow me to properly title it:

Check out the action in the linkage section!

Rock on, COBOL.NET!

Okay, equating "reads like a sentence" with COBOL is a sucker punch; it's unfair. Let's do this by the numbers.

### Comparing the three ways to do the same thing

In all cases, the fewer, the better.

| Syntax | Total chars | Chars used by test syntax (by my measure) | Total times Intellisense was necessary* | Of these, times Intellisense couldn't help (e.g. Assert, Is) |
| --- | --- | --- | --- | --- |
| Assert.That | 66 | 28 | 6 | 2 |
| Assert.AreEqual | 58 | 20 | 4 | 1 |
| result.ShouldEqual | 53 | 15 | 3 | 0 |

* "Intellisense is necessary" is roughly defined as "any point at which you can use Intellisense." Definition is left purposefully imprecise.

I could go on for a bit about this, but I'll let the fancy HTML table do most of the talking. If there are any takeaways from this oversized-for-its-topic post, let them be:

• The fewer magic syntax words I have to learn (e.g. Assert, Is), the easier a framework is to learn. By this measure the MSpec extensions are the best, and the Assert.That syntax the worst.
• The fewer characters typed, the easier a framework is to use. Terseness is better when it doesn't impact learnability.
• "Reads like a sentence" is presumably the means to achieve some other goal, not a goal in and of itself. If your fluent syntax doesn't help achieve…whatever that other goal is, reconsider trying to make your fluent interface read like a sentence.

Related counter-point I don't care about today:

• The fewer extension methods attached to "object," the better.
NOTE: test frameworks and mocking frameworks get a pass from me because they are supposed to work against any object.

### Final note: Better vs Best

I'm complaining today specifically about NUnit's Assert.That(actual, Is.EqualTo(expected)) syntax. I'm not down on the Assert.That() syntax as a whole, just the most commonly used method. And maybe that's what's bugging me—Assert.That has a lot of great stuff in there, allowing fuzzier comparisons beyond simply .AreEqual(), but the most commonly-used scenario is measurably worse than the old syntax.

And the Assert.That(actual, Is.EqualTo(expected)) syntax isn't the worst thing ever. It's not the end of the world. But shouldn't it be better than the thing it's replacing, not worse?

Categories: .NET
Wednesday, May 27, 2009 1:08:01 AM UTC       |  Comments [1]  |  Trackback
Saturday, May 16, 2009 11:18:50 PM UTC

In the past I've questioned the viability of linkblogs—does anyone (including, and perhaps especially the linkblog author) have time to read all these articles?

The short answer is no. They couldn't possibly have time to read and evaluate all those articles.

I think it's become something of a cultural expectation that we scan each of the 50+ links in a daily linkblog post as a way of discovering something interesting, without having the expectation of, you know, reading anything. Inevitably the quality of the links degrades, because nobody's reading the articles. As for me, I'm batting .000 on following linkblog links this year…I'm in a kind of "linkblog hitting slump." Maybe it's just me.

This also goes for programming-related aggregators. First we had Slashdot, then briefly, Digg, then Reddit, then the front page of Reddit became something of a wasteland, so we moved to programming.reddit.com, then there was that thing called Hacker News. Somewhere along this timeline DotNetKicks reached critical mass, before slipping into the doldrums of all-ignorance-all-the-time .NET op-ed pieces; ugh. For the record I still think the aggregators do a good job, it's just that they could do better.

Where were we? Ah yes, links.

I've found the following links fascinating for some reason or other, and I personally vouch for them. If I haven't looked at the link, I'll point it out right there (which I do a lot in the "Books" section.)

Podcast series (AKA Super Podcast Roundup Turbo HD Remix)
Podcasts are roughly ordered by how much I like them…but note that if they're listed here, I like them. My first podcast roundup was in 2006.

• Stack Overflow podcast - having read both CodingHorror and Joel On Software, this one's a lot of fun. Revisit old topics, get their unfiltered take on newer topics. It's good to get the unfiltered opinion, even if they're uninformed from time to time.
• Deep Fried Bytes - I like their rusty washers segment. Maybe that's the pessimist in me, but hey, I'd rather risk an over-critical rusty-washers segment than sit through over-exuberant marketing talk.
• Herding Code - when there are four co-hosts, you get better questions, and the guest isn't allowed to spout FUD/ignorance for an entire episode like sometimes happens on DotNetRocks.
• Hanselminutes - I like the recent trend of doing "follow-up" shows to correct inaccuracies on other podcast series.
• DotNetRocks - classic, still going, and like the rest of 'em, DotNetRocks has both good and bad episodes.
• Software Engineering Radio - in theory I like this show, but I'll be honest and say I haven't listened in a long while.  My commute dropped from an hour to just 8 minutes, what can I say.
• Irregularly updated podcasts I enjoy:
• OOPSLA podcast 2008 and OOPSLA podcast 2007 - some of the best episodes/talks come from this podcast series. Hopefully we'll get the equivalent shows for their 2009 conference.
• Polymorphic Podcast - Craig's still going, several years later. ASP.NET/web development/object-oriented development topics.
• Elegant Code - I don't remember the last time they published something, but, hey, we're in the "irregular" section for a reason.
• ALT.NET podcast - just switched hosts, so we'll see where this goes.
• Rubiverse podcast - run by the former ALT.NET/now-Ruby guy. His shows are infrequent, but good.

Career-oriented (whether the career is freelancing, entrepreneurial, independent consulting, or even working as an employee)

• Daniel James - Building an Indie MMO (Puzzle Pirates) - this is (believe it or not) not so much about making games as it is about building a product. He explicitly mentions that you have to be extraordinarily productive. I'm not selling this well, but trust me, you'll want to check this out. Also, he wears a pirate hat.
• Archaeopteryx by Giles Bowkett - wherein he describes that he'd like to someday have Archaeopteryx (the open source app he built and loves) be his main job. Sometime late in the presentation Giles also says he'd like to describe himself as a "musician who happens to know how to program." It's an engaging laser show, fog machine and all, and as the InfoQ page says, "slides edited directly into the video since there were 500 of them." I don't agree with everything he says, but the career aspect of his presentation is something to think about.
• DHH (creator of Ruby on Rails; 37signals) at Startup School - apparently his talk immediately followed a VC who spent an hour describing how to get VC money. One of the first things he says is "you don't need VC money"; he explains why working in a VC-funded startup is like playing the lottery, and instead offers his revolutionary advice: "charge money for your product." Engaging/entertaining, and a lot of straightforward wisdom.
• Do the Hustle, by Obie Fernandez - straightforward talk on the business aspects of independent consulting for Rails folk. Most of this applies to the rest of us.
• ajmoir's description of a hyperproductive software team - this is a Reddit conversation with multiple threads, so for the full story you've got to read all his replies. I think this is important for everyone to read because you need to believe software development can be done, for lack of a better term, "way better." The promise of hyper-productivity is fascinating. Also: Lisp.
• Mark Cuban on Success and Motivation (long, mostly storytelling)
• How to become a famous Rails Developer, Ruby Rockstar or Code Ninja - I haven't watched the presentation, but I did read his transcript. Also you may be interested in the RailsConf video feed…I haven't found anything else I'd recommend.

• Ten Ways to Screw Up with Agile and XP - this presentation is a kind of response to the "post-agile" idea. Like the "post-agile" stuff we're beginning to hear about, he talks about how Agile projects and teams can fail. Unlike "post-agile," he doesn't blame Agile, instead focusing on solutions for the ten common problems he encounters. Highly recommended, especially for those not sold on Agile. Ben sent me this link some time ago.
• Virtual ALT.NET meetings (ongoing) - these are the in-depth presentations I've been looking for. You can listen in live to any meeting…just plug in a working headset, go to http://snipr.com/virtualaltnet … and that's it. Also, they record sessions! Awesome! http://www.virtualaltnet.com/van/Recordings

On my queue of sorts:
• Øredev 2008 videos - I'm digging through these presently. Unlike most conferences, Øredev has provided videos for each track (i.e. "the breakout sessions")! Awesome! Find the videos either
• Haven't watched - Lang.NET symposium talks. I think I'll check out the two PowerShell videos and then bail, I mean, hey—I've got plenty to check out without delving into programming language design. But, enough about me: you may find some of the other talks interesting. Big ups to Microsoft for publishing the videos.

Books (.NET development-related)

Code camps/Saturday developer events (Houston area, sorry everybody else)

• Austin Code Camp - May 30th, 2009 (soon!) - check out the hot hot hot session proposals! Hot!
• FOSS in Healthcare unconference - July 31st - August 2nd, 2009 - costs money, but maybe it's worth it to you.
• Houston Techfest 2009 - September 26th, 2009. This is the day of the Texas Tech at University of Houston game, on the University of Houston campus. Techfest: est. 600 attendees. Football game: ~30,000 (it's a small stadium). I think we'll have a crowded campus that Saturday!

Everything else

• Using Photos to Enhance Videos - this is one of those jaw-dropping demos. Click click click click click click click.
• Fred Trotter on the "VA VistA Underground Railroad" and how our US government should spend its Healthcare IT money on open source - Healthcare IT has a problem, and I hope an open source ecosystem is a solution. This article is long and gives a lot of history, so you'll get something out of it even if you're not interested in the politics. Also, the links to the VA VistA Underground Railroad were fascinating; folks interested in Behavior Driven Development will enjoy the stories about how "a programmer sits down with a clinician" to write the app. Fascinating for a lot of reasons.
• While we're talking about BDD, you might be interested in David Parnas' keynote at OOPSLA, wherein he lays out eerily similar goals (see the section on Documentation).
• The underhanded C contest - 2007's underhandedly-weak encryption contest: "Your challenge: write the code so that some small fraction of the time (between 1% and 0.01% of files, on average) the encrypted file is weak and can be cracked by an adversary without the password." Make sure to look at the criteria for bonus points, and of course, the winning submissions.
• "Is anyone else here worried that they've spent so long looking briefly at everything, that they're still good at absolutely nothing?" - you don't have to click the link, just acknowledge the point. This reddit post has 1000 upvotes.
• Scott Berkun's Project management for beginners (post is short!) - because, aren't we all beginners? You don't see this kind of straightforward talk from the PMBOK (if you do, it's sandwiched between "effectively denying reality" and "having long status meetings"). In other news, I think I have a rebellious attitude towards the PMI; judge for yourself.
• Abstract architecture-y type discussion - Design and Develop Versatilities, Not Applications - focus on the idea of what he calls "versatilities," and not so much on the specific technology involved (i.e. SharePoint). I think it's a noble goal, but no, SharePoint in its current form can't realize the lofty goal he sets forth. Sorry, no. As I said elsewhere, you'll get far more mileage by teaching your power users to build their own SQL queries and use pivot tables in Excel. But the ideal is good.
• Programming Sucks! Or At Least, It Ought To - Alex (the author) runs thedailywtf.com. I don't know what to say about this. Every programmer needs to find the balance between getting real work done the ugly way, and spending time learning new techniques that make the ugliness go away. I haven't found this balance. This goes in hand with Alex's other classic article, Pounding a Nail: Old Shoe or Glass Bottle? - and carries the same assumption that you must live with your (bad) programming environment.
• All this SharePoint Stuff is Going to be Normal Soon - a lot of people see SharePoint as the next "Microsoft Web OS," i.e. that the SharePoint trend will accelerate, and that we'll start to see every future web-based product from Microsoft (and products from other vendors!) run on top of SharePoint. As it is today, the easy answer is "no, that's not going to happen," because the cost of running your complex app on SharePoint can't be justified. And for tomorrow the answer still looks to be "no, that's not going to happen," because I don't see any fundamental changes taking place. Non-trivial add-ons today write their data to their own database, making their "SharePoint integration" more lip service than truth. I've thought about what I'd like to see in an application framework, and if I could summarize the one thing SharePoint needs but doesn't support: deep customizations that the product team did not anticipate. I think this is the fundamental problem, which at this point is unsolvable for SharePoint. Solving this problem would require re-inventing SharePoint into something that doesn't resemble the SharePoint of today.
But, who knows, I could be horribly wrong about all this.
• Discussion about Microsoft Gold Partners, titled "Why Your Vendor Screwed Up Your SharePoint Project" - wherein the author (gently, ever so gently) points out that Microsoft needs to change its partner ecosystem.
• How to call BS on a guru - again Scott Berkun. He writes books by the way :)
• The DailyWTF programming contest entry (a calc.exe replacement) which is built entirely in C++ templates. I can't tell you what kind of respect I have for that kind of compiler abuse.
• News: Clojure 1.0 - dismiss this at your own peril. Related: ClojureCLR alpha up.
• Is mutation testing useful in practice [StackOverflow question]? I'm reading through Kent Beck's TDD By Example, and he mentions mutation testing. Years later, it seems like no one's talking about mutation testing. Are we doing something else to test our unit tests? Is this too much overhead? Have we adopted a new mental framework that eliminates the need for mutation testing? Anyway, there's your new-old idea for the day: mutation testing.
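If you're fuzzy on what mutation testing even is, here's the idea in miniature. This is a hand-rolled toy sketched in Python (real tools generate the mutants for you; every name here is made up for the example): flip one operator in the code under test, re-run the tests, and see whether anything fails. A mutant that survives means your suite has a blind spot.

```python
# Toy mutation-testing illustration (hypothetical, not from any real tool).

def max_of(a, b):
    """Original code under test."""
    return a if a >= b else b

def max_of_mutant(a, b):
    """Mutant: the '>=' has been flipped to '<='."""
    return a if a <= b else b

def suite_passes(fn):
    """Run the (tiny) test suite against an implementation."""
    try:
        assert fn(3, 5) == 5
        assert fn(5, 3) == 5
        assert fn(4, 4) == 4
        return True
    except AssertionError:
        return False

# A healthy suite passes on the original and "kills" the mutant:
assert suite_passes(max_of)
assert not suite_passes(max_of_mutant)
```

If `max_of_mutant` had passed the suite too, that would be the signal: the tests never actually exercise the comparison.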
Categories: .NET
Technorati:
Saturday, May 16, 2009 11:18:50 PM UTC       |  Comments [0]  |  Trackback
Friday, March 27, 2009 4:48:49 AM UTC

I'm here today to relay two messages:

### First: I'm still alive and well

I haven't posted anything of real substance in quite a while (and some of you in the back of the room are shouting "in a while…or ever!" I can hear you.) I'm not here to promise more frequent and meaty updates; instead, I'm here to say that you can expect a lot less from me, at least on this blog.

My growth-as-a-developer plan (I introduced it in detail here) is going full steam. While I'm not on track to hit all specific targets, the most important thing is that I'm seeing real growth. The bits that have been most helpful for me have been a) writing my own mini-project, and b) reading source code.

I'd like to emphasize how drastically this has changed my outlook. First, reading others' source code gives me self-confidence. And yes that's somewhat mean, I know. But it's true, and I try to beat the "you are adequate" drum as often as possible—by reading others' bad source code, you'll better know where you stand. Sometimes you realize you've got a lot to learn; sometimes you realize that hey, you're not all that bad, relatively. Bad source code can be inspiring in its own way.

And let's pull this around to the positive—I've learned a ton reading others' source code. I've picked up lots of little nuggets like using params[] as a method argument, and bigger nuggets like the several different styles of context/specification-ish unit tests. I shouldn't have to explain this; it should be self-evident that one can learn by studying source code. Duh.
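For anyone who hasn't met the params-array nugget: C#'s `params` keyword lets a caller pass a variable number of arguments without building an array first. Here's the same idea sketched in Python, where `*args` plays that role (names invented for the example):

```python
# Illustrative analog of C#'s `params` arrays: Python's *args.
# C#: int Total(params int[] amounts)  =>  Python: def total(*amounts)

def total(*amounts):
    # amounts arrives as a tuple, just as a C# params parameter
    # arrives as an array the runtime builds for you.
    return sum(amounts)

assert total() == 0        # callers may pass nothing at all...
assert total(5) == 5       # ...one argument...
assert total(1, 2, 3) == 6 # ...or several, no list-building required.
```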

#### Lesson: have a side project!

But more helpful even than reading others' source code is simply getting out there and writing my own. And I don't mean the type of stuff I do at work…let's not go there today. I mean code that is almost 100% logic; data stored in List<T> and passed around as IEnumerable<T>. I don't have a database. I don't have a UI. My project is entirely useless at this point, and will remain useless maybe forever.

But I'm learning a ton! What's great about building my own project is that I'm able to focus on learning specific topics. My focus points for this project are:

• OO (I'll flesh this out further when I know what it means)
• Test driven development (not just unit tests, but actual test-first, drive-out-the-design via tests, TDD)
• Context/specification-style tests

Along the way, as a kind of bonus, I've picked up:

• LINQ to objects - replacing for loops and foreach loops with LINQ calls. Related and also learning: nuttiness featuring delegates.
• Rhino Mocks AAA syntax - with the exception of method argument constraints. If someone wants to show me a good example of using constraints, I'd be ever so grateful—just a link to a project where someone's using RM constraints will work, I'll find it from there.
• NUnit/XUnit/MSpec - in that order, and yes, I switched all my tests over, and yes, the process was ugly. Also you can't claim to know NUnit if all you know is the [Test] attribute and Assert.IsTrue().
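To make the LINQ-to-objects bullet concrete, here's the loop-to-declarative rewrite it describes, sketched in Python (whose comprehensions do the same filter-plus-projection job as Where/Select; the data is made up):

```python
# Made-up data for the illustration.
orders = [("widget", 3), ("gadget", 0), ("gizmo", 7)]

# Imperative style: explicit loop plus accumulator...
names = []
for name, qty in orders:
    if qty > 0:
        names.append(name.upper())

# ...versus one declarative expression. The LINQ flavor would read roughly:
#   orders.Where(o => o.Qty > 0).Select(o => o.Name.ToUpper())
names_declarative = [name.upper() for name, qty in orders if qty > 0]

assert names == names_declarative == ["WIDGET", "GIZMO"]
```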

What's most important about this whole 'writing my own side project' experience is that it is fun, and I had, and continue to have, the energy to keep at it. I'm never motivated to do self-directed learning, so this boost of energy is the biggest win. If you're one of those people who can't imagine this kind of thing could be fun, well, maybe it's time to try out a side project.

#### Everything else has suffered

Everything else has become unimportant. Learning the newest wave of MS technology isn't even a concern at this point; I'll pick it up when I need to, or when my side project calls for it. What's surprising to me is that even ASP.NET MVC, which I happen to like, is being shunned with the rest of them.

Also, blogging has suffered. Also, my book reading has suffered.

But who cares about all that, really. I'm learning a ton, and you can't stop me!

#### Everything in Balance

I should clarify: I'm coming to this concept as the podcast junkie/blog consumer/programming aggregator consumer person, who didn't have a side project. I've been at it (this side project) a few months now.

So if you're thinking my advice is unwise, that's fine—I'll take this space and make a disclaimer: I intend to use common sense, and re-evaluate my learning strategy from time to time. In particular, I do intend to read books in the future, hopefully the near future.

Just not right now.

Way at the top I told you I had two things to say tonight. First was the message that everyone needs to start their own side project, even if just to help them learn.

Second is to tell you that I'm on twitter. Believe it, twitter.com/pseale. Subscribe! Do it!

Something I've found amusing is that I enjoy reading my own twitter feed. It's either a sign that my tweets are engaging and are chock-full of hilarity and insightful content…or that I like smelling my own aroma. You be the judge!

Here's a sampling of my twitter bouquet:

Posted update to my SP unit testing blog entry: http://www.pseale.com/blog/SharePointNotUnitTesting.aspx - summary is "learn OO first"
Thu Mar 05 16:26:41 +0000 2009
I should point out I updated the post because @jopxtwits linked to it at http://tinyurl.com/af9tnc
Thu Mar 05 16:29:17 +0000 2009
TDD in SP projects is a gun rack: http://www.pseale.com/blog/SharePointNotUnitTesting.aspx - yes I just re-updated my own post
Fri Mar 06 03:29:57 +0000 2009
In most recent episode of my ongoing "My Tests Suck" series: just found out I forgot to wire up events, 'the hard way'. No test failed,oops
Fri Mar 06 03:56:11 +0000 2009
Just hit CTRL+SHIFT+B on my Firefox window, out of habit. In other news, I've hit the 50 test mark.
Fri Mar 06 05:43:10 +0000 2009
Oops another bug, not covered by tests. Hopefully I'm learning by experience, emphasis on the word "learning"
Fri Mar 06 06:10:58 +0000 2009
I'm dead serious when I say that using PowerShell to explore SP central admin/SSP is faster than using the browser, esp. w/ 30 sec compile
Fri Mar 06 21:19:18 +0000 2009
compare-object $updateJob ($addToSsp+$inSsp) | group sideindicator — note it's comparing list of fields in $updateJob to UNION of 2 lists
Fri Mar 06 21:42:49 +0000 2009
New-WebServiceProxy - instant web service test harness.
Fri Mar 06 22:34:24 +0000 2009
Does anyone use structs in C# for value objects? Because, I don't.
Sun Mar 08 18:47:55 +0000 2009
Finally discovered what Func<T>/Action<T> are for, yes, I should know this; no, I didn't know this. Now I do.
Mon Mar 09 01:48:55 +0000 2009
Note to self: learn how to use Rhino Mocks constraints…later.
Mon Mar 09 05:42:09 +0000 2009
Ugh, Can't use WCF Service References in VS2005 without installing a never-updated CTP that just now failed install. And yes, I said VS2005.
Mon Mar 09 16:51:35 +0000 2009
In related news: how do you troubleshoot a ~misbehaving VS Web Reference? Is there a verbose mode I can try to see why it fails to map data?
Mon Mar 09 16:57:00 +0000 2009
Q: Where are some good development-related mailing lists in which I can lurk? For me mailing lists are out of sight, out of mind
Mon Mar 09 19:56:05 +0000 2009
RT @yourdon I've begun writing a 3rd edition of "Death March" as a collaborative blog. DM me with your email adr if you'd like to see it.
Tue Mar 10 01:55:55 +0000 2009
RT 2of2 @yourdon Re: "Death March" 3rd ed: emphasize I'm just *starting* it; it's not a finished draft. But you can influence its content…
Tue Mar 10 01:56:46 +0000 2009
This ugly state machine state base class MUST DIE! Rolling up sleeves; got protective eyewear, steel toed boots, lead cup. I'm prepared.
Tue Mar 10 04:05:25 +0000 2009
Interesting series of posts about high expectations set on SP admins: http://www.sharepointblogs.com/matt/
Tue Mar 10 18:41:39 +0000 2009
Someone needs to make a "Watermark Production Central Admin + SSP" branding Feature, so I can know at a glance I'm looking at a prod site
Tue Mar 10 19:26:31 +0000 2009
In other news, the presence of a trailing slash (/) in my URL bombed out a stsadm -o createsite operation. Encourages my paranoia
Tue Mar 10 19:42:15 +0000 2009
I'm using this quick PowerShell script to compile my SharePoint Search scopes on demand: http://poshcode.org/925
Tue Mar 10 20:33:41 +0000 2009
I don't think Folder rules on custom Scopes work. I assume they work on doc libs, but not my custom list. Ugh
Tue Mar 10 20:48:04 +0000 2009
1) Write down concrete next action-style tasks. Failing that, 2) break them up into tiny actions. Failing that, 3) go home. See you tomorrow
Wed Mar 11 00:12:24 +0000 2009
Had thought: "hmm, how am I going to test this? Requires a lot of mocking." Answer: duh, move it out of the class. Trying for 0 static mthds
Wed Mar 11 03:08:54 +0000 2009
Not that I'm saying static methods are totally bad, I'm saying I'm trying to do this entire little project without them. I.e. to try it out.
Wed Mar 11 03:09:44 +0000 2009
In other news, I have test code duplication, and it's painful. But, I'm not sure how best to change tests, need to look around some
Wed Mar 11 03:11:30 +0000 2009
As an added bonus of doing my tests the hard way, JP's BDD framework ( http://is.gd/m3G0 ) makes sense to me now. Well, almost :)
Wed Mar 11 03:14:08 +0000 2009
IEnumerable shouldn't hate on null values as much as it does. Live and let null
Wed Mar 11 04:43:51 +0000 2009
1of#:"Since 2001, 23 TDD studies were published…13 reported improvements…4 were inconclusive, 4 reported no discernable difference. 1…
Wed Mar 11 16:30:22 +0000 2009
2of#:"…Only one study reported a quality penalty for TDD." http://bit.ly/13F8g - SKIP the article, go straight to Hakan Erdogmus comment
Wed Mar 11 16:31:41 +0000 2009
Wed Mar 11 16:34:29 +0000 2009
1of#: For the record people: learn how to do your dayjob better first, THEN look to the shiny new GUI toolkit. If it doesn't help…
Wed Mar 11 19:21:43 +0000 2009
2of#:…doesn't help you be better in some way, than why are you learning it? Also, there's a lot of room for improvement with what we …
Wed Mar 11 19:23:05 +0000 2009
3of#:…have now. No need to wait for Azure on Silverlight + WF + WPF + jQuery to solve our problems for tomorrow; instead, learn how to…
Wed Mar 11 19:23:54 +0000 2009
4of#:…how to build web apps TODAY. People are so far behind, and then they read an article that casually says "learn WPF." LEARN WPF…
Wed Mar 11 19:24:28 +0000 2009
5of5: I'm done. Lesson: if anyone tells you to learn a framework/technology, ask them if they've learned it. Because they haven't.
Wed Mar 11 19:25:05 +0000 2009
Link that started my rant: "6 Things *EVERY* ASP.NET Developer should know by 2010" http://blog.saviantllc.com/archive/2009/03/09/4.aspx
Wed Mar 11 19:25:34 +0000 2009
RT @yourdon: I need lots of new examples, war stories, etc about today's death-march projects. If you've got one, DM me or email
Wed Mar 11 21:21:13 +0000 2009
Someone just said "Shame on you" on my "SP Wikis" post, I need to update the post body itself to be more accurate: http://bit.ly/VihV3
Wed Mar 11 21:27:59 +0000 2009
Also I'll point out I'm highly bemused by the "shame on you" comment :) He's right, but it's still a little funny, esp. the way it's worded
Wed Mar 11 21:29:48 +0000 2009
Q: How many off-hours technical learning would you say is COMMENDABLE? 4 hours a week? 2 hours? Please do reply, I'm curious. I say 4hrs
Wed Mar 11 21:48:20 +0000 2009
PHP is its own reward
Thu Mar 12 18:26:17 +0000 2009
Tomorrow's forecast: EXTRAORDINARILY PRODUCTIVE
Thu Mar 12 20:53:26 +0000 2009
Yes, I'm saying that codebehind in InfoPath forms is exactly like The One Ring: turns good intentions into GREAT EVIL
Fri Mar 13 20:47:59 +0000 2009
"Krikey, the things these artists are doing while everyone else is rewording their unit tests and staring at the TIOBE index." -http://is.gd/nfE2
Fri Mar 13 22:58:09 +0000 2009
META: when your new follower follows 10000+ people, block them; they won't miss you. Also, blatant ads. Block.
Sat Mar 14 00:14:04 +0000 2009
Whatever happened to Blossom? The TV show. Yeah, now you're remembering, that one.
Mon Mar 16 03:16:54 +0000 2009
On keeping up: http://bit.ly/5km3k - this is the #1 reason I've stopped SP-targeted learning—focus on fun! Link from @jpboodhoo
Mon Mar 16 17:59:25 +0000 2009
"SharePoint 14 to public beta in 2 or 3 months" - tweeted 26 days ago - http://twitter.com/rmaclean/statuses/1222833833
Mon Mar 16 20:31:18 +0000 2009
Neat, this is what a psake script looks like: http://is.gd/nDSD
Tue Mar 17 03:09:15 +0000 2009
Botched AnkhSVN file move => "Microsoft Visual Studio (2008) is Busy" dialog
Tue Mar 17 03:21:03 +0000 2009
Ugh, NBehave / NSpec examples (from src) are trivial=>not useful. JP's sample is scary, but is believable
Tue Mar 17 03:48:34 +0000 2009
The MachineSpecifications NUnit extensions are certainly neat: http://bit.ly/MSpecNUnitLove - also, CollectionAssert…it exists.
Tue Mar 17 04:26:57 +0000 2009
DL'ed files are "blocked" for my own safety. Downloaded "streams" from sysinternals to remove blocks en masse. Irony: streams.exe is blocked
Tue Mar 17 04:53:07 +0000 2009
Tue Mar 17 16:23:24 +0000 2009
Ok there are a lot of great PowerShell + SharePoint scripts at http://sharepointpsscripts.codeplex.com/ - common tasks, automated, easy
Tue Mar 17 22:30:42 +0000 2009
YEEAaaaaaaaaaaaaaaaaargh no-index attribute
Wed Mar 18 00:50:52 +0000 2009
~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~~ ~ ~
Wed Mar 18 02:27:52 +0000 2009
SharePoint 14 upgrade details via MS KB articles: http://tinyurl.com/clv8ku
Wed Mar 18 13:47:50 +0000 2009
I've got to unsubscribe from dotnetkicks.com. I keep succumbing to the "someone wrong on the INTERNET" bug
Thu Mar 19 05:25:40 +0000 2009
Most recent post I "couldn't let go": http://tinyurl.com/d6vvx9
Thu Mar 19 05:27:31 +0000 2009
Fellow developers: you can BOTH a) acknowledge your dev skill shortcomings, AND b) feel adequate. SEE: http://secretgeek.net/inadequate.asp
Thu Mar 19 17:54:13 +0000 2009
Run one test =>pass. Run all tests=>same test fails. Lesson: I'm misusing the test framework
Fri Mar 20 02:45:37 +0000 2009
In related news, I'm still looking for how others do "rowtest"-style tests while adhering to the AAA convention. Examples?
Fri Mar 20 02:55:30 +0000 2009
Halfway done switching tests over to MSpec, and just read the output (which shows all specs formatted nicely). It's surprisingly readable.
Fri Mar 20 04:14:42 +0000 2009
Ok I just deleted 2 tests that were dumb. Who's the jerk that wrote them in the first place! Jerk! Oh, that's me, I wrote them, my bad.
Fri Mar 20 04:28:00 +0000 2009
#followfriday @CobraCommander - proving that everyone succumbs to the inanity of Twitter
Fri Mar 20 15:54:27 +0000 2009
Do the SHIFT key! ~ ! @ # $ % ^ & * ( ) _ + ~ ! @ # $ % ^ & * ( ) _ + ~ ! @ # $ % ^ & * ( ) _ + ~ ! @ # $ % ^ & * ( ) _ +
Fri Mar 20 17:50:31 +0000 2009
Finished conversion of my tests to mspec. Now to fix the ugliness that reared its head during the conversion. "Now"=>"later"
Sat Mar 21 07:28:03 +0000 2009
Seriously considering changing my avatar to this:(http://is.gd/ou6J) - Related: http://qwitter.com isn't owned by qwitter
Mon Mar 23 03:24:50 +0000 2009
Do the UNICODE! Õ??????¦?n?????
Mon Mar 23 19:06:10 +0000 2009
New favorite word: "roughage" - http://tinyurl.com/dltl2g
Mon Mar 23 21:22:25 +0000 2009
In other news, I like Neal Ford's arguments against workflow designers: http://bit.ly/ILTZq
Mon Mar 23 21:27:40 +0000 2009
New thought: someone needs to write another twitter client…specifically, a gopher twitter client. Believe it, gopher.
Mon Mar 23 22:28:15 +0000 2009
RT @doctorlinguist: @pseale gopher://gopher.floodgap.com/1/fun/twitpher
Mon Mar 23 22:49:03 +0000 2009
Nothing encourages me to learn keyboard shortcuts more than my laptop touchpad. Tonight's find: CTRL+W, CTRL+E gives focus to Error List
Tue Mar 24 02:40:58 +0000 2009
Found my pre-LINQ code that attempted to count items in an IEnumerable. Thankfully today's me is smarter; ~5 lines replaced with .Count()
Tue Mar 24 02:52:39 +0000 2009
SharePoint's search engine can go die. Forget any nice things I've said about it in the past.
Tue Mar 24 13:41:41 +0000 2009
Or, it's my fault I sacrificed the chicken BEFORE the goat, not AFTER as clearly laid out on MSDN.
Tue Mar 24 13:42:34 +0000 2009
And by "chicken" I mean "ran full crawl" and by "goat" I mean "updated the search scopes." Also forgot to do macarena and sprinkle fairy dust
Tue Mar 24 13:43:48 +0000 2009
And by "macararena" I mean "include displaytitle AS WELL AS ows_Title in the managed property mapping."
Tue Mar 24 13:48:18 +0000 2009
Issue is resolved, I did the macarena and sacrificed a chicken, in that order. See previous tweets to see what I mean, that's the real sol.
Tue Mar 24 14:09:16 +0000 2009
jQuery eliminates crapola JavaScript. I HAVE PROOF
Wed Mar 25 00:00:54 +0000 2009
Another example of how the real-world Internet surpasses imaginations of any fictional cyberspace: http://bit.ly/tCKU - home router worms
Wed Mar 25 12:43:30 +0000 2009
SP as app dev platform: 1) do your app dev the old way, ASP.NET/SQL, but deploy to _layouts/ folder. Declare SUCCESS
Wed Mar 25 21:36:52 +0000 2009
SP as app dev platform 2): 80/20 rule, pretend remaining 20% is "impossible." no project longer than a week
Wed Mar 25 21:42:05 +0000 2009
… 3) extenuating circumstances require you do app dev in SP.
Wed Mar 25 21:45:18 +0000 2009
There is no fourth option. You're doing 1-3 or it's (in my opinion of course) a bad idea.
Wed Mar 25 21:46:39 +0000 2009
The example demonstrating spec failures from thrown exceptions is hilarious: http://bit.ly/HSLYMY - just click the link
Thu Mar 26 03:47:32 +0000 2009
Also note in MSpec that Catch.Exception( ()=>stuff() ) is the syntax. For example see: http://bit.ly/18tVh8
Thu Mar 26 03:55:18 +0000 2009
Cloud computing appreciation manifesto! http://cloudappreciationsociety.org/manifesto/ CLICK! Click it! You won't be disappointed
Thu Mar 26 18:07:33 +0000 2009
Is it just me or should I NOT feel dirty using an image submit button in HTML? http://tinyurl.com/df76nm (<input type="image"/>)
Thu Mar 26 18:12:43 +0000 2009
I call this code pattern "choosing to suppress disgust:" http://img205.imageshack.us/img205/8726/choosingtosuppressdisgu.png
Thu Mar 26 18:59:16 +0000 2009

There wasn't really a point to listing all these out. Well, no reason besides blatantly advertising twitter.com/pseale. Subscribe! Do it!

Categories: .NET
Technorati:
Friday, March 27, 2009 4:48:49 AM UTC       |  Comments [0]  |  Trackback
Thursday, February 19, 2009 5:12:52 AM UTC

Or, how two unlike things can seem alike!

A while back, I followed a fascinating link from programming.reddit titled Pablo Picasso's version of refactoring: Reducing a drawing to 12 perfect pen strokes.

As the story goes, Pablo Picasso created a series of eleven lithographs of a bull in profile. He first created a detailed, accurate image of a bull. Then, for his next lithograph (I don't know what a lithograph is either, let's just pretend these are drawings from now on) he changed some aspects of the bull, accentuating its bull-ness. As he progressed, he began to remove detail, slowly replacing photorealism with smaller expressions of the same aspect, retaining the bull-ness. His last drawing was twelve or so thin strokes, a stick figure still roughly recognizable as a bull.

As the programming.reddit title indicated, this sounds a whole lot like refactoring!

It's super impressive, and I dearly urge you to look at the progression of Picasso lithographs yourself (click link below):

Now for the dangerous part.

.

.

.

.

.

.

.

.

.

Extra space added so you follow the link before viewing the section below; you'll miss out on the full experience otherwise!

.

.

.

.

.

.

So you're with me, right?

.

.

.

.

.

This is something with which I want to leave you. The next time someone makes a bad analogy, nail them with this Descartes quote. I can't pronounce Descartes properly, but that won't stop me, and it shouldn't stop you either. If in doubt, try a "dude, the French philosopher dude," sprinkle the word "dude" anywhere you're uncertain; they serve as TODOs for your vocabulary.

Aside: in true reddit fashion, this is the next highly-rated comment thread:

…and following that, unintentional, then intentional, references to realultimatepower.net.

### Linking this discussion to the present day

This misuse of seeming similarity is (among other reasons) why a lot of us are bugged by recent CodingHorror posts. Specifically, let's take list A:

List A: SOLID principles et al

Here's list B, in The Ferengi Programmer (emphasis added):

List B: 285 Ferengi Rules of Acquisition

The Ferengi are a part of the Star Trek universe, primarily in Deep Space Nine. They're a race of ultra-capitalists whose every business transaction is governed by the 285 Rules of Acquisition. There's a rule for every possible business situation—and, inevitably, an interpretation of those rules that gives the Ferengi license to cheat, steal, and bend the truth to suit their needs.

And in case that was a coincidence, here's the list from his next post, responding to the standard rebuttal:

List C: processes and methodologies

So the question to you: are these three lists the same?

### I win either way

My logic is inescapable. If you think the SOLID principles (list A) are, in fact, as sneaky and extensive as the Ferengi Rules of Acquisition (list B), and are just the newest in a long line of fad methodologies (list C), then hey: I'll point you to the story about the bull, and how we all thought it was similar to refactoring. Except when you think about it, it wasn't refactoring; it only resembled refactoring on the surface. I mean, come on, he drew pictures of a bull, it wasn't refactoring. I dare you to say the Picasso bull lithograph series was like refactoring.

And there I have you as well! Because if you refute my drawing-a-bull-isn't-like-refactoring argument, then by the very nature of your disagreement that "these two things aren't alike," you're proving that "these two things aren't alike!" Carry my "bull metaphor doesn't apply to refactoring" argument over to the "Ferengi rules metaphor doesn't apply to the SOLID principles" argument, and you've proven the very thing you're trying to argue against! I have you either way!

Next time I see you I'll collect the five dollars you owe me. And before you say to yourself "but I don't owe Peter $5," remember, my logic is irrefutable and you owe me a fiver*. Descartes says so. THE BULL! Pay up.

*this is a real word, people use it

Categories: .NET | Awesomeness
Technorati:
Thursday, February 19, 2009 5:12:52 AM UTC       |  Comments [1]  |  Trackback
Sunday, February 01, 2009 10:32:01 PM UTC

This deserves its own post.

After declaring that I won't be writing any iPhone apps, despite my secret dreams of iPhone app fame and riches, I went back and looked for the source of these secret, repressed dreams. Where did I get the idea that there's an iPhone app gold rush?

### iPhone app gold rush stories

I didn't write the titles; the following links are as they appeared to me on either programming.reddit or Hacker News. Click each [comments] link if you're interested.

Categories: .NET
Technorati:
Sunday, February 01, 2009 10:32:01 PM UTC       |  Comments [0]  |  Trackback
Wednesday, January 28, 2009 8:27:42 AM UTC

I won't recap 2008; I dislike public introspection and what's more, you can read all about my 2008 by visiting my blog's home page, which has everything. I think the home page weighs in at 5MB of content right now. It's huge, and unashamed of its hugeness—my blog wears a T-shirt that says "large and in charge." The T-shirt has prominent pizza stains. Deal with it.

It's already late January, and I've missed the new year's deadline, but I'm still roughly in time for the Chinese new year.

### New year's resolutions ahoy!

Programming-related aggregators: your new hobby! One thing dramatically missing from my 2008 was a proper book education. I read every programming-related aggregator known to mankind, listened to every programming podcast known to mankind, and read my share of technical weblogs. But I can't say I read programming books. Books! I shouldn't have to explain why books are uniquely and deeply beneficial to any education.

…So I won't.
#### Resolution: read 6 "fundamentals" books this year

6 is the reach goal, because for me, reading dense textbooks is tough. I used to put myself to sleep reading history textbooks. It turns out, Object Thinking by David West works just as well as a history textbook—even though (in both cases!) I'm interested in the subject at hand, focusing is tough.

Of the six, I'm going to start with JP's short list focused on coding fundamentals—not necessarily design, estimation, DDD, business analysis, project management, management, or whatever other useful fundamental skill you can imagine. Coding, not that other thing.

##### Sub-resolution: read 3 technology-focused books this year

No specifics here because I don't know which three; I'll know when I need them. I'm just writing this as an acknowledgement that yes, at some point in the next year I'll have to tackle some new frameworks; this space is reserved for three such Unnamed Frameworks.

Okay, so, books. That's obvious.

#### Resolution: read source code

Another obvious (and easy!) candidate is reading others' source code. Scott Hanselman has covered the why's of this topic well; I'm just here to say "me too." What's unfortunate is that I'm already running out of good samples. Most of the ASP.NET MVC samples don't even cover all the CRUD operations! CRUD!

At this one I'm doing well. So, stay the course!

#### Resolution: complete and release one minor development project this year

Next: practice. This is easy to describe. If I want to become a strong developer, I need to practice. Others have done a good job explaining why; I'll just say that I plan to do this. And not just attending coding dojos, which are great, but actually doing some self-directed practice. "Practice" isn't a specific goal, so instead, we'll work at one minor development project. Minor means that it doesn't have to change the world or make me a billion jiggawatt dollars.
I'm also going to try to stop reading all the rags-to-riches-iPhone-app stories that appear regularly, seducing me with their plausibility. There's been a lot of those recently (story#1 story#2 story#3 story#4 story#5 story#6).

Anyway, the point is—make a project, finish it, and do so in such a way that I'm not ashamed to release the source code. No ulterior motives, like releasing it later as an iPhone app.

But if I were to release an iPhone app—I have a dream where Steve Jobs shows up on my doorstep holding a duffel bag full of cash. He's there making his daily delivery of my iPhone app's earnings. In my dream Steve Wozniak is there too, giving me a thumbs up and another duffel bag full of cash. Woz doesn't work for Apple anymore; he's there because my iPhone app is that good.

Anyway, no iPhone apps. As a way to practice, make one project; practice techniques while making the app; no ulterior motives. Sounds easy enough. I should clarify that I can't count work projects, no matter how proud of them I may or may not be.

#### Resolution: boycott more Microsoft frameworks

While boycott is a strong word, it may not be strong enough to express how overwhelmed I am by the tide of technologies and frameworks coming from Microsoft! Also, it's a proven strategy—by boycotting Workflow 3.0 and LINQ to SQL in 2008, I saved a bunch of time not learning these deprecated frameworks. I'm sticking with this general strategy for 2009: if I don't need a technology, I won't pressure myself to learn it.

### Putting all this in perspective

These are my technical learning goals for the year. Let me state that by no means is this my life priority for 2009. I think it would be awesome to reflect on 2009 and say "this was a great year," despite woefully failing to meet any of my stated goals above. The point being, there are more important things than arguing about whether Silverlight matters. You know, life!
Oh and, quick, shot-across-the-bow answer: no, Silverlight still doesn't matter; don't learn it yet.

### Final note: if your goal is continuous improvement, ask yourself why?

Something I noticed at the KAIZENC0NF was that there were exclusively enterprise development-related sessions (and I'm culpable, as I could have suggested a topic Friday had I been there Friday). This didn't bother me at the time, but as I look back on the conference, it bothers me now. I think it's because I don't want to be truly great at enterprise development.

Sure, I'm driven by a desire to be good at what I do. Sure, I want to remain gainfully employed, ideally such that I'm more valuable, rather than less, as time passes. This is all reasonable, and yes, I will put in the requisite effort. This means I'll spend time learning things I have no interest in learning; i.e., I'll work at it. The key word there being work.

But I'm not passionate about (name your enterprise vertical). I don't get excited learning a technology, framework or skill if I can only use it at work. And don't think I just mean SharePoint (unpopular amongst .NET developers, an easy target); this applies also to the enterprise development aspects of DDD and Lean (popular, and on an upward arc*), and to learning enterprisey things like data warehouses. Or BPEL, or the abstract concepts behind BPEL. Yawn.

*the key here is to note that yes, I believe they're valuable, but no, I can't seem to get excited about learning them. Don't overreact, I just mean "I can't get excited about learning them."

What's the point of trying to become truly great at enterprise development? Just enterprise development?

Categories: .NET
Technorati:  |
Wednesday, January 28, 2009 8:27:42 AM UTC       |  Comments [0]  |  Trackback

Thursday, December 18, 2008 9:48:39 PM UTC

It's all business again.

### First: I'm going to be away from (hopefully all) computers for a while

I'll be on vacation, in a very real and non-metaphorical sense. Which is awesome.
What it means for you is, if on the offhand chance you leave a comment or email me, I won't respond. Sorry, I'll be away. Sucks for you, awesome for me.

### Second: ASP.NET MVC and SharePoint, together as one

It still boggles the mind how many people are searching for this term. The only thing I want to ask you, the many people who are searching for "how do I get ASP.NET MVC running underneath SharePoint," is: why? Why do you want to do this?

I'm not here to provide any answers today; instead, I'm posting this as a kind of googlebait to lure you in. Maybe it's wrong, but whatever. Why do you want to use the still-beta ASP.NET MVC framework on top of SharePoint? I honestly don't know why anyone would do this. So please, if you search for "how do I combine SharePoint with ASP.NET MVC," and you hit this page, leave me a comment! I want to get in your brain and swim around a little.

I'm pretty sure I can get ASP.NET MVC running underneath SharePoint; the only magic will be removing pieces of SharePoint from the MVC project's web.config, and (maybe) integrating with SharePoint's security (or maybe not). Besides upgrading to .NET 3.5 SP1, which is by far the most arduous step on a production farm, it shouldn't be too tricky to get this working. Anyway, there's a somewhat rambling teaser—it's probably possible, even if I can't imagine why it would be a good idea.

Categories: SharePoint
Technorati:  |
Thursday, December 18, 2008 9:48:39 PM UTC       |  Comments [8]  |  Trackback

Thursday, November 06, 2008 8:16:47 PM UTC

This isn't my building, but you get the idea. Like my building, the elevators line both sides of a short hallway.

I had a moment of sudden disorientation during an elevator ride recently. First, let me explain the elevator setup. In our fancy downtown building, we have a bank of five (or is it six?) elevators. Our elevator bank is housed in the center of the building, lining both sides of a short hallway.
As fancy as we are, we aren't fancy enough to justify glass windows or any of the other elevator luxuries. The doors open, you get in, the doors close, and your new, smaller world is the four brushed-metal elevator walls.

So, as the scene had played out hundreds (or possibly thousands) of times before, the doors opened, I got on the elevator, the doors closed. This time, however, I was distracted—more so than usual—and wasn't paying too much attention to where I was walking. As the elevator began its descent to the ground floor, and as is quite unusual for me, I had a new thought intrude—which elevator am I on? And which way do I turn when the door opens—left or right? I had no idea. And for a brief moment, I was suddenly disoriented—almost in a physical sense.

We'll get back to the elevator story in a moment.

### Conference that shall not be named so that keyword searches shall not pick it up

Last weekend I attended the open spaces event in Austin, and while I'd like to post something saying "it was a great time, well worth it, etc," I can't. There are only two impressions I have after attending the conference.

One: I'm not ready. I'm not even currently using the tools discussed by (and at times, designed by) the other attendees, nor (with my current technology stack) am I planning to use them. Tools aren't everything; my "I'm not ready" feeling also goes for the softer topics like lean/agile/kanban, which are definitely of interest to me, but not in the sense that I have any authority to make changes outside of myself. I'm not a "Big Tymer" like Manny Fresh and Baby.

Before we move onto the second impression, let me talk for a second about my learning queue, by way of Billy Hollis.

### Learning queue

I listened to a fascinating Deep Fried Bytes podcast interviewing Billy Hollis.
Most interesting to me was his discussion of how no one is keeping up with the .NET framework. While Microsoft is now pushing Azure and Windows 7 and C# 4.0 and—whoops, throw out the old Workflow Foundation, we're pressing the reset button with Workflow 4.0—while all this is happening, of the developers Billy Hollis interviews, only ~1 out of 10 are using generics. Generics, which were introduced in 2005, and which (as Billy Hollis pointed out) are not a large topic to learn, are still not in regular use by 9 out of 10 developers. Sample bias noted; even if the developers he interviews aren't representative of the developer population, this is still something to sit up and take note of. The key takeaway is that almost everyone is far behind. And he illustrates this with some stark (if anecdotal) numbers.

Meanwhile, over the last several years I've focused on SharePoint. I've been learning about web parts and workflow and InfoPath and web content management publishing features and ASP.NET app pools and IIS6 and XSL and Solution packages and Feature packages and governance and taxonomies and IA and so on—I've immersed myself in the SharePoint world. It was tough to keep up, especially given the magnitude of SharePoint itself.

But, at some point in the past, I publicly and officially declared, "I'm done." No more SharePoint learning, except what I need for my job, today. And it's really freed me up, in terms of mental weight. Now that I know I no longer need to learn how to do SharePoint workflow, for example, why would I ever want to learn it—especially now that they've announced WF 4.0 will be completely new? Why would I want to research SharePoint object disposal best practices, when I myself no longer need this to get things done at work?

But something else happened, something unintentional. At the moment I declared I was no longer going to learn SharePoint—at that moment I experienced a similar moment of disorientation.
If I'm not going to be a SharePoint guy in the long term, what now? The elevator doors will open soon; left or right?

### Back to the conference

And we're back to talking about the open spaces conference I just attended. This was the conference where I was to meet up with what would become my new community of practice. This would be the group with which I could identify. But for whatever reason, it didn't work out that way.

I've already mentioned that at the conference, I got the strong impression that I wasn't ready to attend; that I needed to do some homework before even being able to process most of what was discussed in the sessions, much less contribute. Surprisingly, at this conference I also had a strong moment of disorientation again. Instead of cementing my understanding of software development into a rigid cast, and allowing me to fall into something of a comfortable pattern as I expected, I felt distinctly less comfortable afterwards.

I don't think it's necessarily a bad thing to be uncomfortable. If we're following the elevator story from earlier—a dubious metaphor to begin with, but hey, here we are at the end and we can't exactly go back and invent a new and possibly worse metaphor—well, let's stick with the elevator story. At the open spaces conference last weekend I experienced a kind of career vertigo—I'm in the moment just before the elevator doors open. It's uncomfortable, but I'm sure the sensation will pass. And when it does, my world will have grown.

Categories: .NET
Technorati:  |
Thursday, November 06, 2008 8:16:47 PM UTC       |  Comments [1]  |  Trackback

Tuesday, October 28, 2008 11:46:47 PM UTC

I couldn't hold out. It's okay though, because today we're strictly business.

In the course of developing a bunch of SharePoint timer jobs recently, I've learned several things, most of which aren't obvious from the get-go:

1. Storing and retrieving configuration data is a problem.
Because I don't have a farm-wide configuration list (yet—the temptation grows every day), I was forced to do some ugliness in order to store and retrieve configuration data. I don't necessarily recommend my approach; instead I'll just say I'm using a custom SPPersistedObject as my Timer Job's config store, and I'll further say that it works, roughly, though I'd now prefer a better way. Consider setting up a farm-wide config list; it's a relatively big time investment but is probably worth it. Other traditional config storage options (such as the web.config, or a site-local config list, or your site-local property bag) aren't accessible to Timer Jobs without some sort of…configuration's configuration…hmm, yes…without something extra pointing the way. Anyway, it's a problem.

2. ANYTIME YOU UPDATE YOUR TIMER JOB CLASS (or custom assembly), YOU MUST BUMP ALL SHAREPOINT TIMER SERVICES ON THE FARM!

I learned this lesson the hard way. If you fail to bump the timer service, it will blissfully run the old copy of your timer job class; I don't know how or why it caches your assembly, but it does, and bumping the SharePoint timer service is the only way to clear the assembly cache(?) and force it to use your minty fresh assembly.

Side-note: can we report this as a bug in the Solution framework? Because this is a big enough gotcha that the Solution framework needs to include an option to -bumptimersvc…or something. Maybe a custom stsadm command (stsadm -o resetadmsvc), or maybe tack something onto the Solution deployment API and associated stsadm commands…we need something.

3. There are differences between the context of a test harness (i.e. something like an NUnit integration test running under the NUnit test runner) and a timer job running in the Timer service.

This may sound obvious, but when you're troubleshooting something that "only breaks on the test farm," this little bit of trivia is important.
If you need to troubleshoot your timer job as it runs on the timer service, specifically, this can get tricky.

Also, in the course of troubleshooting just such an issue (as outlined in #3), I've created a little script to speed up the code-compile-test loop; instead of scheduling a timer job for "0100 hours" and waiting until tomorrow to see the results, why not reschedule the timer job your own self? And that's exactly what I did. My script below will reschedule your timer job to run 10 seconds in the future (read: instantaneously). All you have to do to get this script working for you is customize four variables to match your own timer job, then follow the quick "usage instructions" at the bottom of the script. Below is a rundown of the four variables:

• $siteUrl - the site collection root URL. We need this to get a reference to the SPWebApplication that holds your Timer Job.
• $customAssemblyName - the partial name of your custom assembly. This is necessary because we're going to new up an instance of your timer job, and thus we'll need to first load the containing assembly.
• $jobName - this need only be a rough equivalent of your job name. I'm usually lazy and say something like "*custom profile job*" or the minimum necessary to identify my job from all the rest. Messy is good; we're running a one-off script, right? Or you can go ahead and type the perfect exact case-sensitive job name in there, that's fine too.
• $timerJobClassName - again, we need this because we're going to new up a rescheduled timer job.

Assumptions:

• Your original schedule doesn't matter, and may be destroyed. Because that's exactly what this script does, by the way—it destroys the original schedule and sets a "10 seconds from now" schedule. Incidentally, whatever it did before, your job now runs on a daily schedule :)
• You're only concerned about the timer job running in one web application's context. Because in truth that's all that mattered to me when I wrote this script; I didn't consider the possibility of multiple jobs.
• You're running a single-server farm (i.e. a developer VM). My script only stops the service on the local server.
• No one else cares if you bump the SPTimerV3 service, including any other timer jobs that may be running presently. Note in the script below, PowerShell has some cmdlets to work with Windows Services. I was totally unaware of them until I had to bump this service; neat.

While these assumptions sound scary, trust me—you won't care. On a single developer VM, you won't care about all these things. Even on a multi-server test farm, you won't care—because this script is going to save you hours of troubleshooting.

The PowerShell script is as follows:

$siteUrl = "http://dev"
$customAssemblyName = "Corp.SharePoint.Assembly"
$jobName = "*your job name*wildcards*work*"
$timerJobClassName = "Corp.SharePoint.Namespace.TimerJob"

[void][reflection.assembly]::LoadWithPartialName("Microsoft.SharePoint")
[void][reflection.assembly]::LoadWithPartialName("Microsoft.Office.Server")
[void][reflection.assembly]::LoadWithPartialName($customAssemblyName)

function Run-Init
{
    $global:s = [Microsoft.SharePoint.SPSite]$siteUrl
    $global:webApplication = $s.WebApplication
    $global:job = $webApplication.JobDefinitions | ? { $_.Name -like $jobName }
}

function Create-NewJob
{
    Stop-Service "SPTimerV3"
    Start-Service "SPTimerV3"

    $global:job.Delete()
    $global:job = new-object $timerJobClassName -arg $webApplication

    $sched = new-object Microsoft.SharePoint.SPDailySchedule
    $now = [datetime]::Now.AddSeconds(10)
    $sched.BeginHour = $now.Hour
    $sched.EndHour = $now.Hour
    $sched.BeginMinute = $now.Minute
    $sched.EndMinute = $now.Minute
    $sched.BeginSecond = $now.Second
    $sched.EndSecond = $now.Second
    $global:job.Schedule = $sched
    $global:job.Update()
}

#Usage: paste this script directly into a PowerShell console; the quickest
#way is to right-mouse-button click. Then when you're ready,
#run the following commands (minus the # of course):
#
#Run-Init
#Create-NewJob
#
#Anytime you update your custom assembly "Corp.SharePoint.Assembly", you will need to
#DESTROY your open PowerShell console/session and create a new one. This is the cleanest way
#to unload your old custom assembly.

That's pretty much it. Change the variables to whatever you need, open a PowerShell console, right-click to paste, then type "Run-Init; Create-NewJob". You're done! Step 3: Profit!

Tiny footnote: if you don't care about "context", this script also allows you to execute the timer job immediately. First run the "Run-Init" function, then just type $job.Execute([guid]::Empty) in PowerShell. You can also attach to the PowerShell.exe process and do "remote debugging" of your timer job, if desired. Though if you're going to go that far, you should probably just write an NUnit test that performs the same task, and debug THAT. I'm very pro-unit testing frameworks, really, they're great. Anything that closes the code-compile-test loop, in any way, is a good thing.

Tuesday, October 28, 2008 11:46:47 PM UTC       |  Comments [3]  |  Trackback
Tuesday, October 28, 2008 1:52:14 AM UTC

Yes, I'm aware it's late in the year 2008, I'm aware this stuff isn't as fresh as WPF 3D or Ruby Processing.

As I've posted earlier, I've accrued some treasured junk. Now that I have all this junk, what am I to do? Well, um…I didn't really know either.

So I started messing around.

### Messing around with System.Drawing: first, infrastructure

The first thing I did was to determine the average color for a single image. I'm not sure exactly where I'm going with this, but I figure, hey, if you want to get a rough "picture" of what an image looks like, it's not a bad idea to look at the average color value. And we're using the RGB breakdown for color, meaning white is #FFFFFF (255,255,255), black is #000000 (0,0,0), and everything else falls in between.

Note that in my case, performance is not a big deal; I'm doing all these calculations one pixel at a time which, as you might imagine, is suboptimal. It's mostly a straightforward operation:

public static Color Average(Image image)
{
using (Bitmap bitmap = new Bitmap(image))
{
int red, green, blue;
long redRunningSum = 0, greenRunningSum = 0, blueRunningSum = 0;
long numPixels = bitmap.Width * bitmap.Height;

foreach (Color pixelColor in ImageHelper.GetPixelsFor(bitmap))
{
redRunningSum += pixelColor.R;
blueRunningSum += pixelColor.B;
greenRunningSum += pixelColor.G;
}

red = (int)(redRunningSum / numPixels);
green = (int)(greenRunningSum / numPixels);
blue = (int)(blueRunningSum / numPixels);

return Color.FromArgb(red, green, blue);
}
}
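The same running-sum average can be sketched outside of System.Drawing; here's a minimal Python equivalent operating on an in-memory list of (R, G, B) tuples. This is just an illustration of the arithmetic—the pixel data below is invented, and real code would pull pixels from an image library instead:

```python
# Average color of an "image" represented as a flat list of (R, G, B) tuples.
# Same idea as the C# above: keep a running sum per channel, then divide by
# the pixel count at the end.
def average_color(pixels):
    n = len(pixels)
    red = sum(p[0] for p in pixels) // n
    green = sum(p[1] for p in pixels) // n
    blue = sum(p[2] for p in pixels) // n
    return (red, green, blue)

# One pure-red pixel plus one pure-blue pixel averages to a dim purple.
print(average_color([(255, 0, 0), (0, 0, 255)]))  # → (127, 0, 127)
```

Integer division here mirrors the C# cast to int; both truncate toward zero.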

Ok, so why do we care—it's a function, right? Well, okay, yes—but here's a PowerShell function you may also find interesting:

function Average-Images ($filenames)
{
    [void][reflection.assembly]::LoadFile("C:\a\sandbox\ImgTest\bin\Debug\ImgTest.dll")
    $i = 1
    $total = $filenames.count
    $results = @()
    foreach ($filename in $filenames)
    {
        write-host "$i - $($i*100/$total)%-$($filename)"
        $i++
        $img = [System.Drawing.Image]::FromFile($filename)
        $o = new-object PSObject
        $avg = [ImgTest.ImageHelper]::Average($img)
        add-member -inp $o -membertype "NoteProperty" -name "Filename" -value $filename
        add-member -inp $o -membertype "NoteProperty" -name "Image" -value $img
        add-member -inp $o -membertype "NoteProperty" -name "Red" -value $avg.R
        add-member -inp $o -membertype "NoteProperty" -name "Green" -value $avg.G
        add-member -inp $o -membertype "NoteProperty" -name "Blue" -value $avg.B
        $results += $o
    }
    $results
}

So. This is getting interesting. What the "Average-Images" function above does is create a custom object with some useful properties: we've got the original filename, we've got a still-breathing reference to the System.Drawing.Image object, and we're storing the "average pixel's" red, green, blue values as individual properties. The resulting objects look something like this:

Maybe it's still not interesting for you. That's fine, 'cause this party's* just getting started!
*despite what I've just written, this is not a party
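To make the shape of these record objects concrete outside PowerShell, here's a small Python sketch of the same idea—the filenames and average-channel values below are invented for illustration:

```python
# Hypothetical per-image records, like the objects Average-Images emits
# (minus the live Image reference). All values here are made up.
images = [
    {"Filename": "sunset.jpg", "Red": 180, "Green": 90,  "Blue": 60},
    {"Filename": "ocean.jpg",  "Red": 40,  "Green": 80,  "Blue": 200},
    {"Filename": "grass.jpg",  "Red": 70,  "Green": 160, "Blue": 50},
]

# Object piping, in Python terms: filter on properties, then count —
# e.g. how many images are red-dominant?
red_dominant = [i for i in images
                if i["Red"] > i["Green"] and i["Red"] > i["Blue"]]
print(len(red_dominant))  # → 1
```

In PowerShell the equivalent filter-and-count is a `? { … } | measure` pipeline over the same objects.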

I have one more piece of "infrastructure" to explain, before we can get cooking: I've created a PowerShell function called "Make-Html," which creates a permanent HTML file listing all the images I want to see, in the order I want to see them. As an added bonus, the function immediately launches the newly-created file in my browser. Here's the code:

$startDir = "C:\a\ps1\scrape\"

function Make-Html ($fullFilenames, $resultingFilename)
{
    $files = $fullFilenames | % { $_.split("\")[-1] }
    $tags = $files | % { "<div style=""float:left;""><img src=""$_""/></div>" }
    $html = @"
<html><head><title>$($resultingFilename)</title></head>
<body>
$($tags)
</body></html>
"@

    $html > "$($startDir)$($resultingFilename).html"
    ii "$($startDir)$($resultingFilename).html"
}

Ok, I know, we're still not doing anything.

### Let's warm up

Okay, as I say to everyone, the real power of PowerShell is its object piping. PowerShell pipes objects, not text; this is something best seen, not heard, and hopefully we'll see a little something today. The objects we'll be slinging through the pipeline today are, as mentioned above, custom objects that have a Filename, an Image, and the RGB values representing the image's average (mean?) color.

So, let's count how many items we have:

Awesome. Let's count how many items we have that are more red than any other color:

Hmm, that was unexpected—359 red-dominant images out of 503; that's proportionally huge. I'll point out that I did some extra fanciness to get this count to evaluate on one line, but usually (i.e. when I'm not posting to my blog) I'll work my way up in parts, not all at once. So the same thing, split out, would be:

That's more realistic. Okay, one more thing before we go. Finding out most of my pictures are red-dominant has me wondering: what about the other two? Let's work with the objects a little* to massage the answer out of them:

*a lot; the ugly function that pulls out the dominant color is not shown

Weird.

### Skipping ahead to the end

This is the pattern: we'll ask a burning question, we'll form the question as a PowerShell pipeline, and we'll see the results.

Question: can we see the images in order of "redness"?

Pipeline: $a | sort red | % { $_.filename }

Results:

Least red:

Most red:

Summary: okay, that makes sense. We used a naive algorithm that simply counted the red value, meaning that a pure black image or a pure blue image would have the "least redness" and a pure white image would have as much "redness" as a pure red image. Hmm, we can fix this. Onwards!

Question: Okay, so we're still looking for redness, but relative to the other colors. Let's call this proportional redness.
Hmm, here we go:

Pipeline:

$relativelyRed = $a | select filename, @{Name="redness"; Expression={ $_.red / ($_.red + $_.green + $_.blue) }}
$relativelyRed | sort redness | % { $_.filename }

Results:

Least red:

Most red:

Summary: now that's more like it. Our earlier naive results were instructive, but this is more what I was looking for.

Question: okay, so let's stop messing with redness. Instead, let's find out which images have the most variance between the colors. We're less interested in the white-gray-gray-gray-black spectrum, and are looking for more colorful images. Let's do this:

Pipeline:

$variance = $a | select filename, @{Name="Variance"; Expression={ $avg = ($_.red + $_.green + $_.blue) / 3; $var = [math]::Abs($_.red - $avg) + [math]::Abs($_.green - $avg) + [math]::Abs($_.blue - $avg); $var }}
make-html -fullfilenames ($variance | sort variance | % { $_.filename }) -resultingFilename "variance"

Results:

Most balanced:

Most variance:

Summary: most interesting, besides a grouping of the "grayish" and "black and white" images all together, is the smattering of images that have color, but are so perfectly balanced they're nestled right in there with the pure black-and-white images. Neat.
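Both pipeline expressions above reduce to a few lines of arithmetic. Here's a Python restatement of the two metrics—proportional redness and channel variance—just to pin down the math (the sample RGB values are invented):

```python
# Proportional redness: red's share of the channel total, so pure white
# (255,255,255) no longer outranks a genuinely red image.
def redness(r, g, b):
    total = r + g + b
    return r / total if total else 0.0

# "Variance" as used above: take the mean of the three channels, then sum
# each channel's absolute deviation from that mean. Grays score 0;
# strongly colored images score high.
def channel_variance(r, g, b):
    avg = (r + g + b) / 3
    return abs(r - avg) + abs(g - avg) + abs(b - avg)

print(redness(255, 255, 255))           # → 0.3333333333333333
print(redness(200, 10, 10))             # → 0.9090909090909091
print(channel_variance(128, 128, 128))  # → 0.0
print(channel_variance(255, 0, 0))      # → 340.0
```

Note how both white and mid-gray score exactly 1/3 on redness and 0 on variance—which is why the perfectly balanced color images sort in next to the black-and-white ones.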

### Final bits

This post is already too long. There's not too much else to say, besides a) this stuff is awesome, and b) with the aid of either PowerShell functions or .NET library calls, you can do some complex things. If you only remember one thing from this post, try to pick up the impression I'm trying to leave. This is how I see PowerShell: it's an experimental playground where I slowly morph a thought, an idea, into something workable, and at each step along the way I'm getting feedback and refining, and in the end, I've satisfied my curiosity. Maybe it's something as useless as basic image analysis using System.Drawing.

Incidentally, if you want to see how the professionals do this kind of thing, check out Multicolr—a color search engine indexing 10 million Flickr pictures—which makes the stuff I did above look kind of pitiful :) When I checked last, the Multicolr site was slow; otherwise it's neat; check it out.

Categories: Awesomeness | PowerShell
Technorati:  |
Tuesday, October 28, 2008 1:52:14 AM UTC       |  Comments [1]  |  Trackback
Tuesday, October 21, 2008 11:58:07 AM UTC

HOWDY!

This is a quick announcement to let you know that my site now runs on ASP.NET MVC. A few things have been updated:

### dasBlog 2.2 running on ASP.NET MVC Beta

The first thing I should point out is that I'm running under IIS7 Integrated mode on shared hosting. If you're attempting this, be sure you're running on IIS 7 in Integrated mode. If you're trying to test this out on your own machine, this means you must be running Vista or Server 2008, and must create a fresh web site in IIS and make sure the app pool is running in Integrated mode. Let me be clear: you can't properly test this on the Cassini web server running your Visual Studio project.

ALSO: Now that ASP.NET MVC Beta adds itself to the GAC, for a while (until your host loads the dlls into its server's GAC) you'll have to make local copies of each ASP.NET MVC dll. "Copy Local" is a property under each assembly Reference—set it to True for each one of them.

Ok. There are three things you have to do to get dasBlog working underneath an ASP.NET MVC app.

First, in your MVC app, set Routing to ignore your blog's folder. Mine is called "/blog". Here's what it looks like (I think I stole this from a Phil Haack blog post, so if it looks familiar, it is):

Second, and this won't be an option for all of you—I removed all the System.Web.Extensions (AKA ASP.NET AJAX, AKA "Atlas") from my root MVC app. This fixed the problem I was experiencing with my ASMX-powered RSS feed, which Atlas usurps by default (thanks Ben for the tip, that did the trick).

Third, we need to do some heavy work on the dasBlog web.config. First I'll say, thanks to Paulb on the dasBlog team for providing the starter IIS7 web.config. Big ups to changeset 14700.

Instead of attempting to explain in detail any of the nasty things I've done to make the dasBlog 2.2 web.config work under the most recent MVC drop, I'll just post my web.config directly for viewing. I don't recommend what I've done for others; instead I'll say that I got my web.config minimally working underneath a small MVC-based site.

If you're reading this blog post because no one else has provided a better explanation, then maybe perusing my web.config will help.

Without further ado, I present you: web.config of my dasBlog application running underneath an ASP.NET MVC site.

KNOWN ISSUE: http://www.pseale.com/blog (without the trailing slash) bombs out with an error. http://www.pseale.com/blog/ works fine. I assume it has something to do with the blowery.web component, something that I have no desire to fix; I'll work around the problem with a Routing fix/hack. Anyway, lesson learned: BEWARE TRAILING SLASHES!

UPDATE: apparently the blowery.web compression is somehow interfering with delivery of my CSS files. I say apparently because I didn't attempt to troubleshoot this, I just disabled the HttpModule…with extreme prejudice! As much extreme prejudice as one can muster against an HttpModule, anyway.
Categories: ASP.NET MVC | Awesomeness
Technorati:  |
Tuesday, October 21, 2008 11:58:07 AM UTC       |  Comments [0]  |  Trackback
Tuesday, October 14, 2008 3:31:12 AM UTC

Recently I've been trawling THE INTERNET for retina-dissolving or otherwise awesome images, and have programmatically collected/mushed them into the nuttiness above. More to follow soon, unless I'm lazy. So, uh, probably more to follow…eventually.

EDIT: reposted with an image that is NOT 3MB. Yes, the original image was 3MB, a catastrophically large file. I'm like the guy who sends a holiday greeting PowerPoint over email that brings down the mail server for two days. Thankfully no one subscribes to my blog, otherwise that could have created "heap big bandwidth bill." I blame Windows Live Writer and Paint.NET, daring me to paste directly from one program to the other. For shame, Paint.NET. For shame.

Categories: Awesomeness
Technorati:
Tuesday, October 14, 2008 3:31:12 AM UTC       |  Comments [0]  |  Trackback
Tuesday, September 23, 2008 3:10:01 AM UTC

### Summary first, because I just re-read this and it's ridiculously long

If you've subscribed to my blog exclusively for the SharePoint bits, sorry; now is the time to unsubscribe.

That goes for both of you/half my subscriber base :)

I've become increasingly frustrated with SharePoint (I'll explain why in, um, detailed fashion), and instead of souring like a lemon, I'm just going to refocus. This means I'm dropping out of the SharePoint blogosphere to focus on long-term fundamentals; knowledge that won't expire when SharePoint 14 hits beta. I may jump back on for SharePoint 14, but it all depends.

UPDATED 2008-10-18: fixed grammatical and factual errors. By factual errors, I mean that time when I attributed something to Chris O'Brien's blog, when oopsies! it wasn't him at all.

### Introduction

I've become increasingly frustrated with SharePoint recently (again, I know), but what's been bugging me more lately is the fact that I'm solving all the wrong technical challenges. I think I have a (deeply suppressed) aesthetic sense that, every so often, rears its ugly head. Or flares out from my neck like a humongous goiter. Goiters are a serious problem—hey!—use iodized salt, it's that easy.

I get agitated at times staring at (undocumented) CAML, or anytime I work with InfoPath, or remembering when to surround hardcoded GUIDs with curly braces (and when not), or knowing when to dispose SharePoint-created objects, or remembering which Workflow activities only work in SharePoint Designer and not Visual Studio, or working around concurrency bugs, or troubleshooting failed Solution deployments. And so on, pick your topic; these are real issues by the way.

### So what's the point of this post again?

I want to send a message out there to everyone working with SharePoint: look beyond your immediate technical challenge, and think. Think beyond your immediate issue, beyond what you're wrestling with at this particular moment, beyond your immediate (sometimes overwhelming) technical challenge—with me. Are you ready? Looking at the size of this post, maybe not!

Let's DO THIS.

### Are you an expert at SharePoint development?

I recently watched Dave Thomas give a talk about developing expertise. It's interesting, go watch it if you have time; he's not making it all up on the spot, he relies on some kind of research. In the talk he does something I like: he categorizes everyone (using the Dreyfus model) somewhere along the expertise scale: Novice, Advanced Beginner, Competent, Proficient, Expert. And gives definitions for all five categories, and gives helpful tips for how to work with people of varying skills. It's not all fluff, go watch it.

Bringing this back to SharePoint: after trawling the entire SharePoint blogosphere for quite a while now, I've come to the conclusion that there are no expert SharePoint developers. I'd go as far as to say that there are very few proficient SharePoint developers, and you probably know all their names (hint: look for "MVP" in the title and/or multiple Bentleys in the parking lot).

The rest of us are just struggling. The "everyone's struggling" theme really sunk in recently when I browsed the source code for SharePoint-tagged CodePlex projects. All of them (save one) disposed their objects improperly. Including mine—to be fair, I'm not hating on your free, labor-of-love project; I'm making a point. The point isn't to hate on your baby; the point is to say that no one is doing SharePoint development properly. Look on CodePlex; you'll see what I mean.

Let that sink in. No one is doing SharePoint development properly. And hey, your team may have tackled all the object disposal issues, but maybe instead, you're ignoring large portions of the framework and (unknowingly) rewriting large portions of it. Maybe you don't bother packing things in Features and Solutions; maybe you spend too much time packing things in Features and Solutions. Maybe you're great at writing Site Definition CAML but haven't gotten the memo that putting everything in your site definition is a bad practice (or the other memo telling you that custom Site Definitions don't upgrade to v14). Maybe you're unknowingly triggering framework bugs that ruin your customers' trust in your solutions. Maybe you write unmaintainable code.

Maybe you don't understand your customer's core problem!

Maybe, in your metaphorical dwarven greed, you've delved too deep into the framework, and unknowingly stirred the framework Balrog. And you don't want to wake the framework Balrog, let me tell you.

I don't know about you, but I'm there—I'm struggling.

### Takeaway: just try to be competent

Dave Thomas says in his expertise talk that advanced beginners think they're much further along than they are, and those who are proficient or experts think they're beginners. Let me tell you: as far as SharePoint development skills go, I'm an advanced beginner, bordering on competent. And I mean it—knowing that he says "beginners tend to think they're experts, and experts think they're worse than they are"—with all that said and digested, I still think I'm a beginner (advanced beginner, I'm not a total scrub :) ).

So, and this applies to those of you who are still new to the SharePoint world and/or isolated—it's okay to admit that you're not an expert. If you need an ego boost, go look at CodePlex—you'll certainly learn a great many things from others' code, but you'll also realize that hey, these people don't have it together either!

### Becoming a competent SharePoint developer

What I've also begun to realize is that the more I learn about SharePoint, the less I want to apply it to different scenarios. Where before I used to say "SharePoint Workflow is supposed to be good at solving that problem," now I say "SharePoint Workflow is painful, and for long-term maintainability I'd recommend you hand-code whatever you need, elsewhere." Whereas I used to say "InfoPath is useful for simpler scenarios," now I further limit InfoPath to "end-user tool only; don't you dare use InfoPath's code-behind." I say these with specific scenarios in mind; I have biases; your mileage may vary; all things within reason, etc. But the point stands: I'm recommending SharePoint less.

Dave Thomas also says that, as an advanced beginner, you believe that nothing works. Unfortunately, every time I encounter a new piece of the SharePoint framework, I'm an advanced beginner all over again. I don't get the luxury of being the CQWP guy every day. Today I'm that guy; tomorrow I'm the InfoPath guy; the day after, I'm the admin; the day after that, the help desk. I'm that guy.

And hey, I've been an advanced beginner before. For example, I distinctly remember the first time I tried to do Active Directory scripting with PowerShell. It was painful. But gritting my teeth and working through the pain, I became comfortable with it. And, now that I know my way around, I'm able to do some cool (simple) things with PowerShell and Active Directory.
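As a flavor of the cool-but-simple thing I mean, here's a sketch using the built-in [adsisearcher] type accelerator; the LDAP filter (accounts whose passwords never expire) is just an illustration, and it obviously needs a domain to run against:

```powershell
# Find user accounts flagged DONT_EXPIRE_PASSWORD (bit 65536),
# using the LDAP bitwise-AND matching rule OID.
$searcher = [adsisearcher]'(&(objectCategory=person)(objectClass=user)(userAccountControl:1.2.840.113556.1.4.803:=65536))'
$searcher.PageSize = 1000   # page results so large domains return more than 1000 hits
$searcher.FindAll() | ForEach-Object { $_.Properties['samaccountname'][0] }
```

Nothing fancy, but once you know your way around DirectorySearcher, one-offs like this take minutes instead of an afternoon of VBScript.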

This is a common pattern: I've been an advanced beginner elsewhere, worked through the pain, and come out the other side comfortable and more confident.

What I'm trying to articulate in these long rambling paragraphs is that as I learn more about SharePoint through experience, I become comfortable with pieces of the framework, and I trust the framework less. As I learn more, I trust SharePoint less. Today I'm totally comfortable telling you that InfoPath is absolutely great for simple data entry and simple review scenarios, but nothing else*. And this isn't just InfoPath. As I work my way around the framework, I become comfortable with each individual piece, and as I do so, I trust each piece less.

And let me set the record straight. I'm less interested in whether a particular technology is Turing-complete—whether it can theoretically solve any problem—I'm more interested in whether using that technology is a good idea in the first place. And as I learn more about SharePoint, the world of good ideas seems to shrink.
*note for InfoPath nitpickers: yes, it's useful for other things, but yes, it's also easier for me to say "good for nothing else" than rattle off 15 edge cases.

### Do you want to become proficient?

I think what is bugging me most is that I don't want to be a SharePoint apologizer. Not apologist, apologizer.

Oh, sorry, that doesn't work. Sorry, yeah, there's a bug. No, yeah, you heard it was great for ECM; well, yeah, it's cheaper, but we'll have to customize a lot. Oh, yeah, that's undocumented. Oh, no, we hit the list size limit early. Yeah, no, SharePoint Designer is unmaintainable, don't build solutions with it. Oh yeah, factor in cost of upgrades on every SharePoint project.

Last week we found out "the hard way" that SharePoint can't handle 12,000 documents in a single folder; no metadata, no versioning, nothing fancy. Performance whitepaper aside, 12,000 documents shouldn't be a problem; I don't want to hear about it. This is a bug. And meanwhile, from the other end, I'm attempting to fend off a total migration of terabytes of network file shares to SharePoint, because "this should work, right?"

I don't want to progress with SharePoint, if my energy is going to be spent tackling the wrong challenges.

### Add in the cost of learning

Something that I'm maybe not emphasizing well enough is that, in theory, all things are possible in SharePoint. And if the cost of becoming a SharePoint expert were zero, SharePoint development would be, no question, worlds better than coding by hand. No contest, no question, better.

But learning isn't free.

### SharePoint gold rush

Dux Sy refers to a SharePoint gold rush. It's real. It's real in part because the time cost of learning SharePoint is so high. Either Chris O'Brien or Shane Young (I can't find the entry on his blog) states that you should be wary of hiring developers if fewer than 5 of their last 10 projects were SharePoint projects. It's expensive to find people with that kind of SharePoint experience. Thus, high prices; thus, gold rush.

This equation works for all Enterprise solutions. When I talk about how SharePoint is aggravating my inner engineering brain, trust me: I've worked with other Enterprise systems, and they're worse. SharePoint's a great "Enterprise" product. Believe it or not, Microsoft is quite open compared to other vendors, who require all independent consultants to be licensed by the vendor, or who provide no public documentation at all; all these extra hurdles artificially inflate consulting prices. Microsoft thankfully doesn't do these things (at least that I've noticed). So, it could be worse.

### No more SharePoint on my blog

I've posted exclusively SharePoint content for a while now, and as I do so, I notice my attitude continues to trend negative. It's my doom-and-gloom engineering brain; it can't just let it go. So, instead of ranting further, I'll just, hmm, let it go. No more SharePoint 2007-related topics on my blog, even if a potential topic is brilliant (WHICH IT INEVITABLY WOULD BE!). No more SharePoint content unless I just can't hold it in.

If you've subscribed to my blog exclusively for the SharePoint bits, sorry; now is the time to unsubscribe.

I'm also officially done learning SharePoint 2007 development, permanently: no more off-hours learning, no more digging through my humongous SharePoint blogroll. When SharePoint 14 is announced, I'll evaluate it to see if the situation's improved any. If so, great! Maybe SharePoint 14 will realize the potential of what some are calling "the first real application development platform from Microsoft." There are some awesome business problems that SharePoint already attempts to solve; with just a little extra help, it could be a powerful solution to them. It could be dominant, even.

But if SharePoint 14 is more of the same; if they announce SharePoint Designer Designer 2009, and we're all given the "look what you can do without coding!" demos, and the BDC is still 14 pages of XML without proper tooling, and pieces of the framework are "secretly known" not to work at launch (like the PRIME API), and SharePoint still produces noncompliant HTML out of the box, and I'm still handwriting Feature manifests, and I'm still handwriting CAML, and I'm staring at a doubly-XML-encoded internal field name passed as an argument to a custom XSL function emitting HTML, all wrapped in an XML-encoded WebPart tag, and a new SPDataGridView is released with 50 more undocumented methods and properties, and we need to rewrite all our SharePoint 2007 customizations just to make them work with v14…well, we'll see. If MSDN rolls out a SharePoint 14 resource center and it has 3 articles and 50,000 useless Sandcastle-generated stub pages—gold rush or no—it may be time to bail.

### So what are you going to focus on, Peter?

I have three words for you.

FUN.
DUH.
MENTALS!

Project management (not getting a PMI cert; instead, the meat of project management). Proper object-oriented design. Programming languages (plural). HTML/CSS/JavaScript (they're not going anywhere).

Personal projects!

Programming for fun—something that may even inspire me!

And hey, maybe SharePoint 14 will be a pleasure to work with; maybe I'll jump on that bandwagon…I'm the guy who owns sharepoint14.com, after all.

Categories: SharePoint
Tuesday, September 23, 2008 3:10:01 AM UTC       |  Comments [3]  |  Trackback
Monday, September 22, 2008 8:00:22 AM UTC

Estimating is difficult to begin with, but add on

• a new technology stack with which you're unfamiliar,
• a sometimes unreliable and often undocumented API (I could back this up with examples; just take my word for it),
• and tight integration with, and reliance on, Active Directory, SQL Server, IIS, Exchange, Office, and Internet Explorer, all of which may fail in difficult-to-debug ways.

Given all this, how much risk are you taking on if you provide a tight estimate? If you haven't tackled three or more Visual Studio Workflow projects, for example, how do you even go about getting a ballpark estimate on your first one?

On a recent project I put a 10x difference between my lowest and highest estimates for an SAP integration task, and everyone complained to me that "there's too much variance." Well, what should I put there? I've never tried it before!

This isn't a passive-aggressive way of trying to win a work argument via my blog. If I start doing that, dude, let me know, I don't want to be that guy.

But it is a cry for help. How are we supposed to estimate SharePoint development tasks? If you're not an experienced SharePoint team, and by "experienced" I mean experienced in building the exact same solution, using the exact same method calls on the exact same SharePoint objects—if you're not truly experienced, then how do you even attempt estimating?

When you may be blocked anywhere between zero to ten times on a task, and each blockage may take up to a full week (or more) to resolve?

In traditional build-it-from-the-ground-up systems, you can estimate the complexity of your requirements, form a rough idea of what your solution will look like, and compare the complexity of your new task with similar tasks you've done in the past. This works for traditional development, so long as you double your estimate and then increase it by an order of magnitude :). The idea is, when you're building from the ground up, estimating is "as simple as*" gauging your effort against the complexity of the problem.
* saying estimating is "as simple as" anything is an implied joke; you may laugh now

But if your task says "write a custom field that pulls data from SAP," or if someone says "hey, let's build this web part with Silverlight!"…how do you put down an estimate? When the task, performed by an experienced expert (see above for my definition of experienced), takes an hour, and in reality the bulk of your time is spent discovering how to do that one-hour task?

What do you do when your honest estimate is, "this will take between 40 and 400 hours?"
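For what it's worth, the closest thing classic project management has to an answer is the three-point (PERT-style) estimate, which at least makes the uncertainty explicit instead of pretending it away. A toy sketch, with an invented "likely" value of 100 hours:

```python
def three_point(optimistic, likely, pessimistic):
    """PERT weighted average, plus the conventional standard deviation."""
    estimate = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return estimate, std_dev

# The honest "40 to 400 hours" range, with an assumed mode of 100 hours:
est, sd = three_point(40, 100, 400)
print(est, sd)  # 140.0 60.0
```

Notice the standard deviation is still 60 hours. The formula summarizes a wide range; it doesn't shrink it.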

### I don't have the answer

I'm just throwing this out there because no one even talks about it. So here it is. When you're on a project and someone asks you for an estimate, and you answer "4 hours to 4000 man-months", and they laugh, hey—you're not alone. I can't figure it out either.

UPDATED 2008-10-21: I made mostly grammar fixes—so if you notice some changes, no, you're not going subtly crazy, this post changed a little.

Categories: SharePoint
Monday, September 22, 2008 8:00:22 AM UTC       |  Comments [7]  |  Trackback