April 18, 2008 | Comments Off
Recently a friend and I started talking about possibly doing some technology podcasts. Our general concept is to have a podcast that talks about how developers keep current with the never-ending sea of new technologies, languages and software development kits (SDKs). It is both daunting and exciting to always be learning new technologies. The structure of a weekly or bi-weekly podcast that dives into a new technology or concept each time will give us a means for exploring new things that interest us, and might also be useful to other developers.
Well, if we are going to do a podcast we need gear. I didn’t have anything at all, but my friend already had a good microphone and preamp from other audio work he does. I spent a few weeks educating myself about the various alternatives and finally settled on the following setup to get started.
- Sennheiser HD280 Professional headphones for monitoring the audio.
- Edirol UA-25 USB audio capture and MIDI interface.
- Rode NT1-A Studio/Live Performance Condenser Microphone.
- Atlas Sound DS7E desktop mic stand.
- Shure Studio Mic Pop Filter.
- ProCo StageMASTER 10′ mic/audio cable, XLR.
As you can see in the first picture, the Rode NT1-A fits nicely in the Atlas desktop mic stand. The Shure pop filter attaches to the base of the mic stand and is easily positioned in front of the mic. The only issue I have noticed with this setup so far is that unless the pop filter bracket is swung to the side, the weight of the mic and pop filter causes the whole stand to tip forward a bit. With a little adjusting I found a position for the pop filter that seemed to keep the weight distribution in check.
I had considered a more professional desktop swing-arm mic stand, but figured that I would only be recording about once a week. I didn’t really like the idea of having that swing-arm attached to my desk all the time. It just seemed like that would become a distraction. The desktop mic stand is very portable and I think that will make it easy to move it out of the way when not in use. My friend went the other route, getting a desktop swing-arm, in part because he has a limited amount of desktop space.
It only took about 15 minutes from the time I opened the BSW box of goodies until I had the whole setup running with my MacBook Pro. The Edirol UA-25 was recognized by the MacBook Pro right away. Here is a picture of the whole setup.
The desk is kind of messy right now. I will be experimenting over the next few days with how to better organize all of the pieces. My MacBook Pro is held up off the desk by an Ergotron laptop swing-arm, and is also hooked to a 24″ Samsung SyncMaster 245BW monitor. One issue I have right now with the arrangement of the equipment is that the mic gets in the way of the lower half of the 245BW. Since I expect to be recording some podcast screencasts using ScreenFlow, I really want to work out a placement for the equipment that doesn’t get in the way of using the screen, but still keeps the microphone positioned for good sound pickup. Once I figure out a setup that works for me I will post an update here.
April 4, 2008 | Comments Off
The following was seen recently, buried in the responsibilities list of an employment advertisement seeking a skilled Director of Software Engineering to lead a small team developing a new software product.
Make time for continuous re-factoring and improving existing code and unit tests, to keep the codebase clean, maintainable and fun to work on
What makes this so interesting to me is how rarely senior-level software engineering positions require this sort of investment in the organization’s future. I really wish more companies would recognize how important it is to aggressively carve out some fraction of the engineering time in each release and allocate it to re-factoring and improving existing code.
At every place I have worked, it seems like I end up being the sole voice for continuous improvement and refinement of the existing code. It always feels like I’m trying to sell snow removal equipment in the Middle East, or pushing ten times my weight up a steep incline.
Anyone who has spent time in Startup Land™ will be familiar with the oft-heard refrain “There isn’t time to do it right, just make it work.” This is destructive on so many levels: to the product, to the team, to the company, to shareholder value, etc.
The current mindset is basically to pour as many new features as possible into each subsequent product release and to fix some percentage of the bugs that exist in the system. When architectural deficiencies surface within the code-base, management will often choose to defer addressing the issue (i.e., the issue is shelved indefinitely).
In essence this behavior is a race to the bottom, both for the development team and for the product. As more features are added and more work-arounds are implemented to fix bugs, the software becomes more and more brittle. This cycle continues until it is impossible to re-factor at all. The inevitable result is a full rewrite of the software, or the death of the product as the company can no longer keep pace with newer competitors that are not constrained by this legacy.
I once heard Grady Booch [Blog, Wikipedia], co-inventor of UML, speaking at a conference about the state of our industry. He used the analogy of building dog houses vs. skyscrapers to make a similar point. His position was that without time spent on architecture, object-oriented design and continuous re-factoring, it is impossible to build anything as complex as a skyscraper. He recognized that many teams have a grand vision for their software, but unless they embrace software architecture as a discipline they will never be able to stack up multiple dog houses to create a skyscraper. It just isn’t possible. I encourage you to read through his PowerPoint presentation on the topic or view a PDF here.
A significant change in the mindset of software managers is needed in our industry. The lifespan of most successful large-scale software systems is at least five years, and many systems are in productive use for decades. To remain agile and responsive as a company, the code-base must be re-factored, improved and refined at every step along the way.
April 2, 2008 | 2 Comments
I spend most of my time developing software in C/C++, and have done so for many years now. One thing that always catches me by surprise is when I find code written by someone who decides to put implementation details of a function or method in a header file. The only reason I can come up with for why developers would do this is pure laziness.
Let’s face it, C and C++ are such a pain in the butt. I mean, each ADT or class needs two files: a .h and a .c/.cpp. It sure would be easier to just lump all the stuff in a single file. Well, since C/C++ can’t generally handle that, we might as well sprinkle some of the code into the .h file and some of it into the .cpp file. The compiler really doesn’t care where the code is. And, to top it off, switching between files in most editors takes at least one keystroke. Whoooaaa – what a burden for a busy developer.
Of course, this has been written about by numerous people on the web, as well as having been covered by many technical journals and books. The concept is to keep the interface in one file and the implementation in another. I am still amazed to find that people routinely place implementation details in the header files of C++ classes. All this does is slow down compile times. Some might say, “So what, it doesn’t really matter anyhow. The program still compiles.” Well, I say life is too short to spend all your time waiting for the compiler to recompile code that it already compiled, and for which there is no actual reason to recompile at this point.
It all comes down to dependency management. Developers need to understand how the pieces of their programs interact, how that interaction impacts the build speed for the application, and be aware of techniques they can use to minimize the dependencies between components in their applications.
So, why is it such a bad idea to put implementation details into a header file? There are a number of design principles suggesting that separating interface from implementation pays dividends over the lifetime of an application.
When implementation details creep into an interface header, everyone suffers. The compiler has to recompile many more modules each time an implementation change is made. Usually these recompiles are completely unnecessary, since all that the other modules require is the interface definition from the header file. The problem is that the compiler cannot distinguish an interface change from an implementation change when all of the code is packed into a single header file.
By keeping interface and implementation separate, compile speed will improve. On all but the most trivial projects this is an important thing to keep in mind.