Wm. Ruffin Bailey
Putting my experience on display

Web apps & SaaS



Grantmaking Project Lead

As project lead on an 18-month rewrite of Blackbaud, Inc.'s Grantmaking platform, an approximately $30 million/year revenue product, I was tasked with updating a legacy CSLA.NET-based application whose client side was mired in a Silverlight-dependent presentation stack.

Prior to my arrival, Blackbaud had begun a complete rewrite of the Grantmaking system on their own Azure-hosted .NET MVC microservice stack with Angular 2+ and MongoDB, but the effort had missed several deadlines even as Silverlight, deprecated by Microsoft in 2012, lost support in every major browser other than the latest versions of Internet Explorer on Windows. Competitors were selling against the stack, claiming we had no plans to support clients in the future.

After getting the lay of the land, I proposed a new MVP focus to ensure clients would have a new system to use before Silverlight's October 2021 end-of-life date. The key points of this new focus were:

  • Keep and maintain the legacy system's mature and complex business logic, written in C# and SQL Server using the CSLA framework.
  • Create new Azure cloud microservices in the Blackbaud ecosystem using .NET MVC and MongoDB to...
    1. Act as a secure proxy to the legacy system's API
    2. Support Blackbaud Single Sign-On and enable inter-service interoperability/upsells.
  • Use an 80/20 Rule mentality to port the existing Silverlight UI to Angular 6, leveraging the existing UI as a "high fidelity prototype" rather than reconceiving the entire system.

A short time after presenting this plan, I was shifted from team lead to project lead. I worked closely with product management to define strategy, make tech stack decisions, create the backlog (from epics, features, and milestones down to individual Scrum stories), and set development priorities for the four development teams. I also continued as a key hands-on contributor, coding about 50% of each working day to demonstrate best practices, review new code, and complete complex tasks.

Within a few months, the newly architected system was running well enough to allow entity entry -- something the preceding rewrite had not supported even after four years of development (no kidding!).


    MessageFactory

    MessageFactory is a .NET MVC system that facilitates the surprisingly complicated process of laying out direct mailings for one of the country's leading "full service direct marketing" companies, which serves several Fortune 500 clients. It seeks to replace arcane, in-house systems of emailed Word documents, shared folders of dated image files, and complicated management structures with a streamlined system that handles asset management and versioning, layout composition, and automatic manager notifications.

    Work started when the system was still in its relative infancy: over the course of just over a year, a team of two developers (one in-house and myself as a contractor/consultant) scaffolded the system from scratch. I worked mostly on three areas: the original job edit screens, the asset management pages, and the notifications system (for which I was solely responsible, schema to client). All sections used Telerik web UI controls (except the administration pages), .NET MVC, and LINQ to Entities to talk to SQL Server.

    Other responsibilities included creating a plugin for the CKEditor web editor and integrating tagifyJS, a zero-dependency JavaScript tag UI widget I wrote over a break.



    HIPAA Secure Chat

    Secure Chat required building a system-wide chat capability into an existing healthcare practice management SaaS. Beyond integrating with the current code -- a two-headed client with legacy KnockoutJS code partially ported to ReactJS -- the new chat system also had to satisfy HIPAA-compliant privacy and auditability requirements.

    The client UI was reasonably straightforward. The stack was ReactJS, MobX, and JSX, transpiled by Babel. The dedicated chat interface was a standard module, managed by a homegrown module router for React, and could be coded conventionally. The pop-up UI that needed to appear within the top-level menu throughout the application, however, required creating and managing a new MobX datastore to ensure all chat data remained available even in sections that had not yet been ported from KnockoutJS. But, again, reasonably straightforward.

    Start a Chat and pop-up UI

    The server side was a bit more convoluted. Before I started, the client had licensed an XMPP provider to power the chat system. Though the XMPP provider, QuickBlox, did support secure AWS hosting and excellent auditability (all messages were saved and backed up nicely), XMPP remains better suited to Trillian-style Internet chat from the early aughts than to secure HIPAA messaging in 2018. The minimally invasive solution was to block outside access to QuickBlox's messaging endpoint and handle all sends via custom proxy middleware that integrated with the practice management software's existing rules, roles, and rights. Within this middleware, user logins could be verified, and messages were forwarded to the XMPP server only when sent to recipients who, at that moment, had valid relationships with the sender.
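    The gatekeeping rule at the heart of that proxy can be sketched in a few lines. This is a hypothetical illustration only -- the names (`hasActiveRelationship`, `canForward`) and the message shape are inventions, not the production middleware:

```javascript
// Hypothetical sketch of the proxy's gatekeeping rule: a message is
// forwarded to the XMPP server only when the sender currently holds a
// valid, active relationship with every recipient.

// Stand-in for the practice-management system's rules/roles/rights check.
function hasActiveRelationship(relationships, senderId, recipientId) {
  return relationships.some(
    (r) => r.from === senderId && r.to === recipientId && r.active
  );
}

// Decide whether a message may be forwarded at this moment.
function canForward(relationships, message) {
  return message.recipients.every((recipientId) =>
    hasActiveRelationship(relationships, message.senderId, recipientId)
  );
}

// Example: one active and one revoked relationship.
const relationships = [
  { from: "dr-smith", to: "nurse-jones", active: true },
  { from: "dr-smith", to: "billing-lee", active: false },
];

canForward(relationships, { senderId: "dr-smith", recipients: ["nurse-jones"] }); // true
canForward(relationships, { senderId: "dr-smith", recipients: ["nurse-jones", "billing-lee"] }); // false
```

    The point of the design is that the XMPP server never has to know the business rules; every send passes through this one choke point.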

    There were a number of edge cases to cover, as when group permissions might change, and a group "host" lost their relationship/permission with a group member (or vice versa, when a member lost their relationship with the chat group host). To ensure history wasn't lost to any group member, such situations required freezing the conversation, preserving access, but preventing any new messages from being sent until all group members regained active relationships with the group host.
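    The freeze rule amounts to a simple predicate: history stays readable, but sends are blocked until every member/host relationship is active again. A minimal sketch, with hypothetical names and data shapes rather than the actual code:

```javascript
// Hypothetical sketch of the group-freeze rule: a conversation accepts new
// messages only while every member holds an active relationship with the
// host (in either direction); otherwise it stays readable but frozen.

function conversationState(group) {
  const allActive = group.members.every((m) =>
    group.activeRelationships.some(
      (r) =>
        (r.from === group.hostId && r.to === m) ||
        (r.from === m && r.to === group.hostId)
    )
  );
  return {
    canRead: true,      // history is always preserved for every member
    canSend: allActive, // frozen as soon as any relationship lapses
    status: allActive ? "active" : "frozen",
  };
}

const group = {
  hostId: "dr-smith",
  members: ["nurse-jones", "billing-lee"],
  activeRelationships: [{ from: "dr-smith", to: "nurse-jones" }],
};

conversationState(group).status; // "frozen" -- billing-lee lost their relationship
```

    Re-adding the lapsed relationship flips the conversation back to active with no data migration, which is what made freezing preferable to deleting or splitting the group.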

    QuickBlox itself also presented some issues: unexpectedly high resource use on its AWS instances that required reboots, and login and command throttling when accessed from specific IP addresses -- like that of the custom middleware that proxied all users' messages.

    The server-side proxy was refactored to optimize login management and account for these issues, providing a service solid enough that the SaaS company itself adopted the new chat service as an in-house replacement for Slack after just a few months of development.



    PeopleMatter

    Save for Later status

    PeopleMatter was a SaaS solution for hiring, training, and scheduling applicants and employees who work in the services sector, since purchased by Snagajob.

    Work for PeopleMatter centered on two pieces of the product: Schedule, the module that creates schedules for employees across stores and maintains a running count of hours worked and staffing costs, and Hire, where applicants are tracked, interviews are scheduled, and onboarding tasks (I-9 forms, workforce eligibility, training) are managed.

    Schedule work was done using .NET MVC with C#, SQL Server via NHibernate on the backend, and KnockoutJS templating for the client. It was initially a seven-team project that leaned exceptionally hard on maximizing data in the client's browser -- the JSON payload routinely exceeded a megabyte when testing medium-size sample companies. Similarly, the client contained especially complex JavaScript view models bound to KnockoutJS templates to manage that data. Work here centered largely on client validation, ensuring proper translation of business rules from server to client code, speed optimizations, memory management, AJAX interactions with server controller actions, and data serialization.

    My most important work on Schedule probably came when, a few weeks from a contracted release, I was tasked with investigating performance on Internet Explorer 6. Schedule used infinite scroll: scrolling down the schedule continually loaded more entries into the UI. This worked well in Chrome, especially on faster machines with recent processors, but on slower machines -- or any machine using Internet Explorer 6 -- performance for sizeable schedules would quickly crater.

    After letting management know, "It's dead, Jim," I took a few weeks to work with a designer and non-destructively add paging to Schedule for Internet Explorer 6 and 7 only. It was an exceptionally defensive fix: paging could be turned off by changing one boolean in the client-side code, and the number of entries per page could be tuned with one integer, even setting different page sizes for different browser versions and types. This was a minimal-risk fix with a potentially high reward.
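    In spirit, the kill switch and tunables looked something like the following. This is an illustrative sketch only -- the constants and function names are hypothetical, not the shipped code:

```javascript
// Hypothetical sketch of the paging kill switch and per-browser tuning:
// one boolean disables paging entirely; one lookup sets entries per page.

const PAGING_ENABLED = true; // flip to false to fall back to infinite scroll

const PAGE_SIZES = {
  ie6: 25,     // oldest browsers get the smallest pages
  ie7: 50,
  default: 0,  // 0 = no paging; keep infinite scroll
};

function entriesPerPage(browser) {
  if (!PAGING_ENABLED) return 0;
  return PAGE_SIZES[browser] !== undefined ? PAGE_SIZES[browser] : PAGE_SIZES.default;
}

entriesPerPage("ie6");    // 25
entriesPerPage("chrome"); // 0 -> infinite scroll
```

    Concentrating the behavior in two constants is what made the change low-risk: reverting to the original design was a one-line edit.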

    Since all of the data for each schedule was downloaded in a giant, originally single-use JSON payload on page load, it was only necessary to add a de/serialization routine to the JavaScript code for a "person-schedule", the foundational object model used on the page, to and from that JSON payload. As pages made edits saved to the server, those changes to person-schedules were serialized into and out of the client-side JSON datastore, so fresh information was accessed during paging. This allowed slower browsers to make the same edits with much less DOM overhead than the original design required -- and to do so without any changes to, or new interactions with, the server. This did duplicate server-side serialization logic, and it was a band-aid on a much more serious performance issue (and an architecture that didn't scale as well as was needed), but a few weeks of triage work, done without any server-side churn, allowed Schedule to ship on time to all users -- particularly to a major "homepage logo client" whose internal network only allowed the use of IE6.
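    A minimal sketch of that pattern (hypothetical names in modern JavaScript; the real person-schedule model was far richer): edits are serialized back into the client-side datastore, so each page is rehydrated from fresh data without a server round trip.

```javascript
// Hypothetical sketch: a client-side datastore that pages over the original
// JSON payload and writes edits back into it, so re-rendering any page
// always sees fresh data without touching the server.

function createScheduleStore(payload, pageSize) {
  const store = payload.slice(); // the page-load JSON, kept as source of truth

  return {
    // Deserialize just one page's worth of person-schedules for the DOM.
    getPage(pageIndex) {
      return store.slice(pageIndex * pageSize, (pageIndex + 1) * pageSize);
    },
    // Serialize an edited person-schedule back into the datastore.
    saveEdit(edited) {
      const i = store.findIndex((p) => p.personId === edited.personId);
      if (i !== -1) store[i] = edited;
    },
  };
}

const store = createScheduleStore(
  [
    { personId: 1, hours: 40 },
    { personId: 2, hours: 32 },
    { personId: 3, hours: 20 },
  ],
  2 // entries per page
);

store.saveEdit({ personId: 3, hours: 24 });
store.getPage(1); // [{ personId: 3, hours: 24 }] -- edits survive paging
```

    Only the current page's objects are ever bound to the DOM, which is the whole trick: the memory and rendering cost tracks the page size, not the schedule size.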

    There is a pretty good training video that demonstrates Schedule in less than two minutes here, with a local zipped copy in case that link ever breaks.

    The Spring 2014 release concentrated on Hire. To help make changes in applicant status auditable, we added a new "Status Change Reason" interface. Each location in a company could optionally turn on this audit function, which would, for instance, collect reasons when an applicant was promoted to job candidate or had an interview scheduled, and would also capture when an applicant had been rejected. Now, instead of simply having a note of when a change was made, managers could look back and know why those changes were made, and defend those decisions if audited for age or other demographic discrimination. This work took on quite a bit more refactoring than it might appear: each status previously had its own separate workflow in the code, and this work largely pushed them all into a shared process that reused dialogs and centralized tech-debt-heavy code.

    Save for Later was a much easier task, and simply added a new searchable category for applicants. It was essentially a bookmarking feature.



    Army Mapper

    Note: Please see the Mapping page for work at SYNCADD on Army Mapper. Though it was largely database work, it is more specifically a mapping/GIS project.


    DHEC Immunizations Registry

    Add new record mini-frame on top of demographics tab

    The Immunizations Registry (IMZ) for the South Carolina Department of Health and Environmental Control (DHEC) was an alpha rewrite of their previous system, written in Delphi, maintaining the business logic of their legacy DB2 database. The IMZ allows pediatric and other doctors' offices in the state of South Carolina to manage registered children's scheduled immunizations, both those already administered and those due. The project included learning the schema of the existing DB2 database, coding user interfaces with detailed data quality assurance requirements in .NET and jQuery, and connecting the UI and data tiers through a shared middle data tier, itself then also in development.

    This system had fairly complicated business rules, like figuring out how many days old a baby is (three months from June through August isn't the same number of days as January through March, and some shots counted one way and some, apparently, another!) and how long after an initial shot another could be administered and still be effective. HIPAA considerations also meant searches had to be pessimistically constructed. Accidentally giving out every Smith in the database would be a really bad, and potentially illegal, thing to do.
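    The "how many days old" wrinkle is easy to see with plain date arithmetic. This is illustrative only; the actual IMZ rules lived in the business tier, not in client code like this:

```javascript
// Why "three months" isn't a fixed number of days: June-August spans
// 92 days, while January-March (in a non-leap year) spans only 90.

const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Math.round absorbs any one-hour daylight-saving offset in the difference.
function daysBetween(start, end) {
  return Math.round((end - start) / MS_PER_DAY);
}

// Months are 0-indexed in JavaScript Dates.
daysBetween(new Date(2019, 5, 1), new Date(2019, 8, 1)); // 92 (Jun 1 -> Sep 1)
daysBetween(new Date(2019, 0, 1), new Date(2019, 3, 1)); // 90 (Jan 1 -> Apr 1)
```

    Whether a dose window should be measured in calendar months or literal days is exactly the kind of rule that had to be pinned down per shot.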

    Note that the relatively tight width and plain design were, in part, due to conservative rules about users' machines, ensuring that fairly old browsers on even older hardware would still be able to access the system easily. The interface and data tier were written in the space of approximately two and a half months (ramp-up included).



    SpiderSavings.com

    SpiderSavings.com Recent Coupons

    SpiderSavings.com is the online portion of Spider Savings, a marketing company that helps local companies with print, video, and online advertisements. SpiderSavings.com has three sections: one for end users, one for coupon providers, and another for Spider Savings' employees to administer the site.

    This project involved creating a new database schema from scratch, manipulating images on the server to create and design coupons, accepting web payments, producing and sending SMS messages, and using AJAX to provide thick-client functionality within the browser.



    Coastal Services Center Management Information System (MIS)

    CSC Management Information Systems Homepage

    The MIS helps CSC allot its fifteen-plus-million-dollar budget and nearly one hundred employees across scores of projects. The application uses Microsoft SQL Server behind Microsoft Internet Information Server (IIS) and creates a web interface using VBScript-powered Active Server Pages (ASP). JavaScript ensures data entry meets specific criteria, and server-side processing rechecks it before entering data into the MIS database. This project is an intranet-only site.



    Information Request Tracking System (IRTS)

    CSC Information Request and Tracking System

    CSC solicits online customer information on a voluntary basis, and the IRTS collects and reports this information for internal use. The system both requests information from users who are downloading products, including email address and organization type, and reports these variables to the lead webmaster for yearly reporting. The IRTS is also used to create mailing labels when past customers request new products. This site is another SQL Server-driven ASP site, and its reporting capabilities are intranet-only.




    CSC Product/Project Description (PPD) Maintenance System

    As products at the Coastal Services Center became more numerous, a products maintenance system was created that allowed employees to create entries for new products as they were being produced, along with the ability to post-date their release on the external web site. A second level of review was built in to allow managers to "green-light" entries and later edits before releasing them to the web.

    One major advantage of the system was that the database template had been passed through the normal review channels for web design, ensuring the Center-wide, mandated look and feel was utilized, while new products could be added without additional, intra-office review.

    Administration page mock-ups (uneditable; for viewing only)




    NCSU CRDM Students Page

    NCSU CRDM Students Page

    As North Carolina State University's new Communication, Rhetoric, and Digital Media program added new classes, keeping the students' information page up to date was a headache. This PHP/MySQL system allowed students to make their own edits and additions, shifting maintenance from the college's webmaster to the system and its pre-approved template design.


    Description of development process for NCSU CRDM student information page

    Description Page