Information developer or software engineer? Or something in between?

I have a really hard time describing my role to people outside of IBM. If I'm feeling lazy and don't want to go into the details of all my different projects, I'll usually just say that I am a technical writer.

I do not consider myself a technical writer, though, because I believe the term is so limiting. I worked as a web developer for five years while completing my Bachelor of Science in Technical Communication from the Department of Human Centered Design and Engineering at the University of Washington.

Most of my work experience has been in the realm of web development, and my degree work was an even mix of user experience engineering and technical writing.

Graphical aid for generating object setup scripts (patent pending)

I joined IBM as an information developer intern (co-op) in June of 2005. Dell Burner, my team lead, was working on a project with our visual designer and user experience engineer to create a quick-start visual for setting up WebSphere MQ for use with Q replication. Q replication is a database replication technology whose architecture and infrastructure make it extremely high performing. The integral piece that lets Q replication outperform standard SQL-based replication is that it pushes its instructions over WebSphere MQ message queues.

WebSphere MQ administrators are a whole different audience than DB2 database administrators. The two products are like night and day. DB2 database administrators typically don't know anything about WebSphere MQ and usually aren't allowed to touch the WebSphere MQ configuration anyway.

Q replication faced a major customer pain point when it came to configuring WebSphere MQ. The DBAs were not skilled in the technologies needed to implement it, and in most shops there were communication and procedural barriers to getting the environment configured. Even in the best-case scenario, getting just the WebSphere MQ portion of the environment configured took a minimum of 1-2 days. If the DBAs were working in a z/OS shop, that time frame could be even longer.

Of course, a best-case scenario assumes all parties involved are well versed in their requirements and can communicate them from DBA to WebSphere MQ admin.

In an effort to ease the learning curve of Q replication, Dell Burner, Daina Pupons-Wickham, Kevin Lau, and Michael Mao set out to create a diagram-based document and supporting instructions to help DBAs create the setup scripts that they would then ask the WebSphere MQ admins to run.
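The scripts in question are WebSphere MQ setup (MQSC) scripts. As a rough sketch of the kind of script a DBA might hand to an MQ administrator, the fragment below defines a few of the queues and a channel that Q replication relies on; the queue manager, queue, and channel names are invented for illustration, and a real Q replication setup involves more objects than shown here.

```
* Illustrative MQSC fragment, run against a hypothetical source
* queue manager QM1 that replicates to a target queue manager QM2.
DEFINE QLOCAL('ASN.QM1.ADMINQ') PUT(ENABLED) GET(ENABLED) REPLACE
DEFINE QLOCAL('ASN.QM1.RESTARTQ') PUT(ENABLED) GET(ENABLED) REPLACE
DEFINE QLOCAL('QM2.XMITQ') USAGE(XMITQ) REPLACE
DEFINE QREMOTE('ASN.QM1_TO_QM2.DATAQ') RNAME('ASN.QM1_TO_QM2.DATAQ') +
       RQMNAME('QM2') XMITQ('QM2.XMITQ') REPLACE
DEFINE CHANNEL('QM1.TO.QM2') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('target.example.com(1414)') XMITQ('QM2.XMITQ') REPLACE
```

Getting even a handful of definitions like these right is exactly the cross-team friction the visual aid was designed to remove: the DBA fills in the names, the MQ admin reviews and runs the script.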

Greasemonkey script for Lotus Connections dynamic drop-down menus

Install this script from IBM Lotus Connections Drop-down menus

IBM’s Lotus Connections software is an enterprise social software application that consolidates a lot of collaboration and information sharing within the enterprise to a single location. We use Lotus Connections quite extensively within IBM and it has been an amazing improvement over many of our previous tools and methods for project, team, and professional collaboration.

The recently announced Lotus Connections 2.5 features communities, blogs, wikis, file sharing, social bookmarking, activities, profiles, and a customizable dashboard. I can tell you that I use every single one of those features on a daily basis, and it's been a great step forward in so many ways. Knowledge is being shared more widely across our enterprise and can be found more easily. Projects and teams can stay organized and on track. To-dos can be assigned and tracked.

While my overall satisfaction with the product is quite high, one small design issue bugs me. Thankfully, Lotus Connections is a web application, so I can use the Greasemonkey add-on for Firefox to alter aspects of the design. I often find myself having to switch between sections in Lotus Connections, such as being on a wiki page and needing to browse to a wiki within a community. The default design has a top bar menu with only the top-most navigation items:


Getting around Lotus Connections sometimes requires clicking through multiple unnecessary pages, so I decided to add drop-down menus to each menu item to reduce the number of clicks. I eventually discovered the ATOM feeds for each Lotus Connections area and determined I could add links for specific areas, such as a list of all your communities or a list of all files that have been shared with you (I had to blur out the actual files in the screenshot):
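The feed-driven part of the script boils down to turning each ATOM feed into a list of menu links. Here is a toy sketch of that step; the feed snippet is invented, and a real Greasemonkey script would fetch the feed with GM_xmlhttpRequest and parse the XML properly rather than using the regex shortcut that keeps this sketch self-contained.

```javascript
// Toy sketch: turn an ATOM feed into menu-link data for a drop-down.
// The regex parsing below is only for illustration; real code should
// use an XML parser on the response.
function atomToMenuLinks(atomXml) {
  var links = [];
  var entryRe = /<entry>([\s\S]*?)<\/entry>/g;
  var m;
  while ((m = entryRe.exec(atomXml)) !== null) {
    var entry = m[1];
    var title = (entry.match(/<title[^>]*>([^<]*)<\/title>/) || [])[1];
    var href = (entry.match(/<link[^>]*href="([^"]*)"/) || [])[1];
    if (title && href) {
      links.push({ title: title, href: href });
    }
  }
  return links;
}

// Hypothetical snippet shaped like a Connections "my communities" feed:
var feed =
  '<feed>' +
  '<entry><title>Team Alpha</title><link href="/communities/alpha"/></entry>' +
  '<entry><title>Docs Guild</title><link href="/communities/docs"/></entry>' +
  '</feed>';

atomToMenuLinks(feed);
// → [{ title: "Team Alpha", href: "/communities/alpha" },
//    { title: "Docs Guild", href: "/communities/docs" }]
```

Each returned object becomes one `<a>` inside the drop-down that the script appends under the matching top-bar item.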


Overall, this script makes an already awesome application a bit better. I'm hoping to convince the Lotus Connections teams to implement some kind of navigation that matches this experience, or something similar, for more quickly navigating to specific items in your network.

Using this script

You can use this script on, which is an IBM implementation of Lotus Connections 2.5. You can also customize the locations that this Greasemonkey script runs on to include your own installations. For example, if you want it to run against* you would include that location:
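The run locations are controlled by @include lines in the script's metadata header. Adding your own installation would look something like this; the host name below is a made-up placeholder, not a real deployment.

```
// ==UserScript==
// @name         Lotus Connections drop-down menus
// @namespace    http://example.com/greasemonkey
// @include      https://connections.example.com/*
// ==/UserScript==
```

Each @include line is a URL pattern; add one line per Connections installation you want the menus to appear on.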


If you are an IBM employee, you can find an internal version of this script for use with our own internal deployments. Check my internal Connections Blog by searching for Greasemonkey.

Improving navigation for IBM’s My developerWorks with Greasemonkey

I recently released a version of my Lotus Connections Greasemonkey script, which adds dynamic drop-down menus, for use with IBM's myDeveloperWorks. You can download this script from IBM My developerWorks drop-down menus.

Improvements to myDeveloperWorks

MyDeveloperWorks is awesome, but sometimes navigating between sections gets annoying if you are someone who constantly uses the site. Jumping from Blogs to your Groups can involve multiple page loads and clicks by default. This script helps eliminate that unnecessary browsing by adding drop-down menus to the site's existing menu.

Default menu on myDeveloperWorks

When you install this script, it will add some standard drop-down menus and it will also load your specific items such as your groups, bookmarks, and activities.

Showing dynamic and customized drop-down menus that are added by this script

Script details

Interested in the code? This Greasemonkey script uses jQuery 1.2.3 to perform most of its work. jQuery makes it amazingly easy for a Greasemonkey script to work with the Document Object Model (DOM), add fancy effects, and perform Ajax calls.

jQuery is mostly focused on the use of selectors to uniquely identify content, in a manner similar to CSS selectors. This selector behavior is awesome for Greasemonkey scripts because parsing a site's HTML with standard JavaScript is not always easy, and the code quickly becomes extremely complex. jQuery lets you spend your time improving the user experience rather than dealing with the plumbing.
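The difference is easiest to see side by side. This is only a toy illustration of the selector idea, run against plain objects rather than a live DOM; in the actual script the equivalent is a one-line jQuery call such as $('a.menuItem'), and the tree below is invented.

```javascript
// Toy model of DOM nodes: each has a tag, a class list, and children.
var banner = {
  tag: 'div', classes: ['lotusBanner'], children: [
    { tag: 'ul', classes: [], children: [
      { tag: 'a', classes: ['menuItem'], children: [] },
      { tag: 'a', classes: ['menuItem'], children: [] }
    ] }
  ]
};

// The "standard JavaScript" way: hand-roll a traversal per query.
function findAnchors(node, out) {
  out = out || [];
  if (node.tag === 'a') out.push(node);
  node.children.forEach(function (child) { findAnchors(child, out); });
  return out;
}

// A micro "selector engine" for the toy tree: supports "tag.class" only.
function select(root, selector) {
  var parts = selector.split('.');
  var tag = parts[0], cls = parts[1];
  var out = [];
  (function walk(node) {
    var tagOk = !tag || node.tag === tag;
    var clsOk = !cls || node.classes.indexOf(cls) !== -1;
    if (tagOk && clsOk) out.push(node);
    node.children.forEach(walk);
  })(root);
  return out;
}

findAnchors(banner).length;          // 2, via a custom walker
select(banner, 'a.menuItem').length; // 2, via one declarative expression
```

One generic selector function replaces a new hand-written traversal for every query, which is essentially the convenience jQuery provides on a real page.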

Great Outdoors sample database (GSDB)

In May of 2008, I switched roles at IBM. I moved from being a standard information developer (technical writer) to a much more open-ended role that I often have a difficult time describing, internally titled "Information Development Infrastructure Lead" for the IBM Integrated Data Management portfolio of products.

One facet of this new, broad role was leading a team of developers, technical sales, information developers, and subject matter experts in the creation or adoption of a common sample database for the entire portfolio. The IBM Integrated Data Management portfolio is mostly composed of IBM Optim products and IBM Data Studio, with some other tools thrown in. Overall, the portfolio has potentially 30 or so products, each with its own database requirements.

Adopt or create?

IBM has a variety of sample databases around the organization. Some are small, some are random, some are for very specific situations, and very few are realistic or rich enough for broad use. My first task was evaluating all of the potential sample data sets available to our teams to try to find one that best fit our requirements.

Key requirements

The outcome of the project needed to meet some key requirements for our products:

  • A story or scenario that drives the usage and design of the database. It must be similar to a realistic customer environment to help relate our tools and information to solving the customers' own problems.
  • Support the wide variety of databases that our tools support, including DB2 for Linux, UNIX, and Windows, DB2 for z/OS, Oracle, Informix Dynamic Server, and Microsoft SQL Server.
  • Support key product scenarios and features, including data warehousing, data mining, modeling, application development, and general database tasks.
  • Support XML capabilities in the various databases.
  • Support spatial data teams (mapping data).

Very few of the databases my team investigated came close to meeting any of these goals.

Cognos acquisition

Earlier in the year, IBM had acquired the business intelligence software company Cognos. Cognos brought to IBM an extremely rich, realistic, relevant, and widely adopted sample database known as the Great Outdoors sample database (GSDB). This sample database is based on the operations of a fictional outdoor distributor known as the Great Outdoors company. The data set included five schemas: business-to-business sales data (GOSALES), warehouse data (GOSALESDW), human resource data (GOSALESHR), retailer data (GOSALESRT), and marketing data (GOSALESMR).

This database was by far the closest match to our portfolio's requirements. Prior to our work, the database already had the following features:

  • Based on a story about a fictional business.
  • Five schemas that covered different types of business data.
  • Localized and translated.
    • All text within the database is available in 25 languages.
    • Data is spread out across the world and can expose different scenarios in different regions and countries.
  • Initial support for DB2 for Linux, UNIX, and Windows; Oracle; and Microsoft SQL Server.

Expanding the GSDB database

To meet our key requirements, we needed some additions to the sample database. I had to carefully navigate the internal politics of this cross-organizational project and work carefully with the acquisition teams to ensure that all interested parties were happy and that the relationships remained beneficial for everyone.

Working with the Cognos teams, we prioritized and planned the addition of a new schema and an extension to the story that the database follows. To meet the needs of our sales enablement and data mining teams, we required business-to-consumer data. This meant adding customers as individuals rather than businesses, transactions, customer interests, fake credit data, and a variety of other information.

The data mining and data warehousing teams had the most challenging requirements to implement. The data that we added needed to be quite deep as well as intelligent: to exercise the data mining tools, we needed to add associations, patterns, time series, and trends to the B2C transaction data. Before the transactions could be generated, we first had to add approximately 30,000 new customer records.

Adding customers

Adding the customers involved more manual work than other aspects of the GSDB expansion. We involved teams and individuals around IBM to assist with the data collection. Eventually, over 50 people across many countries had helped add the following types of data:

  • First and last names that were representative of the population of a country and culture. For example, some cultures have different surname patterns for females, such as Czech female surnames that end in "ova."
  • Translated or localized versions of all data. For example, city names in both their English and local forms, such as Bangkok and its Thai name กรุงเทพมหานคร.
  • A variety of city names, state/province names, street names, and corresponding postal codes and phone codes.

All of the above data was gathered and then randomly mixed so that we could generate additional rows from combinations of first names, last names, addresses, and cities. This also ensured that all data was random and that any similarity to a real person was entirely by chance. Additional attributes such as age, profession, marital status, and hobbies were then added to these records randomly.
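The mix-and-recombine step can be sketched as follows. The name and city pools below are invented stand-ins for the collected data, and a small deterministic generator stands in for whatever randomization the real tooling used; this is an illustration of the idea, not the actual process.

```javascript
// Sketch of the recombination step: pools of collected values are mixed
// independently so that generated rows never correspond to a real person.
// A tiny linear congruential generator keeps the sketch deterministic.
function makeRng(seed) {
  var state = seed;
  return function () {
    state = (state * 1103515245 + 12345) % 2147483648;
    return state / 2147483648;
  };
}

// Invented pools; the real project collected thousands of culturally
// representative values from contributors around the world.
var firstNames = ['Anna', 'Petr', 'Mei', 'Somchai'];
var lastNames  = ['Novakova', 'Svoboda', 'Chen', 'Srisuk'];
var cities     = ['Prague', 'Bangkok', 'Shanghai'];
var hobbies    = ['hiking', 'camping', 'fishing'];

function generateCustomers(count, seed) {
  var rnd = makeRng(seed);
  var pick = function (pool) { return pool[Math.floor(rnd() * pool.length)]; };
  var rows = [];
  for (var i = 0; i < count; i++) {
    rows.push({
      id: i + 1,
      firstName: pick(firstNames),
      lastName: pick(lastNames),   // mixed independently of first name
      city: pick(cities),
      age: 18 + Math.floor(rnd() * 60),
      hobby: pick(hobbies)         // extra attributes added randomly
    });
  }
  return rows;
}

generateCustomers(3, 42); // three synthetic customer rows
```

Because each field is drawn from its own pool independently, a generated "Anna Srisuk in Prague" is a chance combination, which is exactly the property we needed.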

Generating complex transactions

One of the members of my extended team was a developer and subject matter expert who worked with the data warehousing and data mining teams. He created an application for us that read our new customers table and generated orders either randomly or using logic based on age, location, profession, marital status, and hobbies.

The application read the approximately 30,000 customer records and generated half a million transactions for those customers. Patterns, associations, trends, and other intricacies were planted in these transactions so that our mining tools could discover and expose them, demonstrating the value of the tools and how to use them.
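The "random or logic-driven" behavior of the generator might be sketched like this. The product catalog, affinity mapping, and bias probability are all invented for illustration; the real application's rules were far richer.

```javascript
// Sketch: orders are either uniformly random or biased by customer
// attributes, planting associations the mining tools can later surface.
var products = ['tent', 'sleeping bag', 'fishing rod', 'boots', 'stove'];
var affinity = {
  camping: ['tent', 'sleeping bag', 'stove'],
  fishing: ['fishing rod', 'boots']
};

// rnd is any function returning a number in [0, 1).
function generateOrders(customer, count, rnd) {
  var orders = [];
  for (var i = 0; i < count; i++) {
    var pool = products;
    // Bias a majority of a customer's orders toward their hobby so the
    // hobby↔product association exists in the data to be mined.
    if (affinity[customer.hobby] && rnd() < 0.7) {
      pool = affinity[customer.hobby];
    }
    orders.push({
      customerId: customer.id,
      product: pool[Math.floor(rnd() * pool.length)]
    });
  }
  return orders;
}

// With rnd() pinned to 0, every order is biased and takes the first item:
generateOrders({ id: 1, hobby: 'camping' }, 2, function () { return 0; });
// → [{ customerId: 1, product: "tent" }, { customerId: 1, product: "tent" }]
```

Run over 30,000 customers with a few hundred orders each, rules like this are what produce the half million transactions with discoverable structure.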

GSDB sample database usage at IBM

Our new version of the sample database is starting to gain wide usage across IBM's Software Group. Many teams are writing developerWorks articles, tutorials, and demos based on both the story behind the data and the data itself. You can see many of those using the following query:

One of the first projects to use the sample database was the IBM InfoSphere Warehouse information development team, which created three tutorials around the sample: SQL warehousing, cubing services, and data mining.

Of course, the Cognos products also continue to make extensive use of the data within their product line and by a variety of teams including information development, quality assurance, education, and sales.