Revenge of the Y2K Bug - Some 'Fixes' Strike Back

Loraine Lawson

The experts always attach a lot of caveats and explanations whenever anyone says it, but by now we all know that one of the main drivers for service-oriented architecture is to simplify integration, particularly with legacy systems.

 

But it turns out that when they're digging around in that legacy code, organizations sometimes uncover something unexpected: bad Y2K fixes.

 

You read that right. Companies using SOA to integrate legacy systems are unearthing ugly code dating back to the Y2K fixes, according to a recent Joe McKendrick post, cleverly titled "Blast from the Past: Will Y2K Patches Trip Up SOA?"

 

And as you might expect, it's slowing SOA down and adding costs to SOA implementations. McKendrick takes a very philosophical view of the situation, writing:

Sometimes, in the interest of expediency and budget, things were buried behind remediation layers -- things that have to be addressed as they are service enabled. Perhaps things would have gone a lot smoother if we had SOA back in 1998. Then we could have just done the remediation in the service interface. And we would have had a great business case for SOA.

Ah, the irony. The bitter, expensive irony.

 


You can thank Don Fowler for figuring this out. Fowler is an IBM-certified SOA solution designer and System z technical sales representative for Vicom Infinity, an IBM business partner.

 

McKendrick linked to Fowler's article, which actually ran more than a year ago in zJournal. I asked McKendrick why he's writing about this now. Back in the day, he covered the Y2K problem quite a bit, and he'd long been mulling over the potential that it might resurface. While he hasn't heard of any organizations that have actually encountered the problem, he pointed out it's something companies probably wouldn't want to talk about.

 

So, I contacted Fowler and asked whether he still considered some of the Y2K "fixes" a ticking time bomb and whether he had written anything since the January 2009 piece.

 

Yes, he responded via e-mail, it's definitely a problem. In fact, he's working on a remediation project at "one of the largest financial institutions" to correct upcoming failure situations:

I raised the alarm via the 2009 article. All a follow up article would do is point out specific live code examples that we found that will fail and some statistics on total software inventory to infected members of that inventory. We did find several cases where failures were already occurring and data store was being corrupted.

In particular, Fowler says he would be concerned about any enterprise that had planned to retire an application between January 1, 2000 and around 2010:

Did that application actually get retired? If not, did that anticipated retirement affect the remediation decision made on how to resolve the original Y2K issue? An example might be using a Y2K repair action called "windowing" and setting the window "pivot" to 50. Any year variable greater than or equal to 50 is a 1900 for century. Any year variable less than 50 is a 2000 century.
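The windowing repair Fowler describes is simple enough to sketch in a few lines. This is a hypothetical illustration of the technique, not code from any actual remediation:

```python
def expand_year(yy, pivot=50):
    """Y2K 'windowing': expand a two-digit year using a pivot.

    Two-digit years at or above the pivot are assumed to be 19xx;
    years below the pivot are assumed to be 20xx.
    """
    return (1900 if yy >= pivot else 2000) + yy

print(expand_year(99))  # 1999 -- old records still read correctly
print(expand_year(10))  # 2010
print(expand_year(55))  # 1955 -- but a 2055 date is silently misread
```

The repair is cheap because the stored data never changes; the catch is that the window only buys time, and any date past the far edge of the window flips back a full century.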

Some other issues Fowler suggests you consider:

  • Has anything in the business world changed drastically since the Y2K remediation was done? An example would be the financial meltdown. Fowler suggests you look at what had to be done to stabilize that whole situation and determine whether that should prompt a revisit of the original remediation approach. "Certain financial instruments durations have gone from 30 years to 40 and indeed even talk of 50 years," he says. "Using the above example of a 50 pivot, the 40 year instrument would start causing bad end or termination dates THIS YEAR. Calculations done using these pivots would be suspect for any thing requiring a 40+ year horizon."
  • The next issue is that each system was an island of Y2K remediation, he says, which can cause problems when you're sharing or migrating data. "Thus, unless it is full eight-digit date, you just can't assume that the date you are being passed is treated the same on each system. Nor can you assume that the eight-digit date being passed was created using similar Y2K approaches," he writes. "Data chaos can occur by applying some Y2K remediation that might be totally different between the sender/receiver or client/server. An example would be one system that was supposed to retire, so the repair was to do a windowing with a 20 pivot. After all the application was to be gone by 2010, so making it a 20 is double the amount needed."
  • Another potential problem area is when the system is now used as component services within a SOA. "Another system that uses a 50 as the pivot now wished to make use of the first system's services. Along comes 2020. Guess what happens?" he writes. "Or even just looking at this scenario today in 2010, what would happen? Any date greater than 2020 would be translated wrong between the two systems."
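The pivot-mismatch scenario Fowler describes is easy to reproduce. Here is a hypothetical sketch, assuming one system was patched with a pivot of 20 (because it was supposed to retire) and the service that now consumes it uses a pivot of 50:

```python
def expand_year(yy, pivot):
    """Windowing repair: yy >= pivot -> 19xx, otherwise 20xx."""
    return (1900 if yy >= pivot else 2000) + yy

# The same two-digit year crosses a service boundary...
yy = 25  # e.g. a maturity date someone intended as 2025

system_a = expand_year(yy, pivot=20)  # the "retiring by 2010" system
system_b = expand_year(yy, pivot=50)  # the newer service calling it

print(system_a, system_b)  # 1925 2025 -- a full century of disagreement
```

Neither system is "wrong" by its own rules; the corruption only appears when the two windows overlap on the wire, which is exactly the kind of coupling SOA creates.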

 

So, obviously, it's still an issue. In fact, it's arguably more relevant this year, the 10-year anniversary of the Y2K crunch, since some of the Y2K fixes actually just used a method that could expire sometime between now and 2013, according to Fowler:

A lot of the pivots I've seen are set to 50 or 51. So I guess the bulk of our problems will surface the morning of 2011 or 2012. The Mayans may not have been too far off! Let's be honest. This problem is not nearly the disaster or cost of the original Y2K bug. But it will continue to plague us with scenarios like I've painted above.

Tick, tick, tick, tick.

 

But at least the problem has gotten easier to resolve. Fowler says the Y2K remediation team uses a solution from Cornerstone Technology, a Netherlands-based company, that automates the identification and repair of Y2K code.

 

The smart approach is to view SOA, and legacy integration in general, as an opportunity to catch those kinds of hidden bugs in legacy systems. (And if you think Y2K was all a hoax, talk to these banks about it. Sony can also tell you how bad bugs in the date field can be.)

 

But this issue also highlights a bigger, systemic problem for IT: how little organizations have matured in the way they deal with legacy systems. In an InfoWorld article on Y2K's 10-year anniversary, Gartner analyst Dale Vecchio says many companies ignored the big lesson: Know Thy IT Portfolio, Inside and Out. Vecchio covered the Y2K bug back in the day. He told InfoWorld:

I'd like to tell you that there were a lot of lessons learned, but I'm not sure that I've seen a lot. ... once they passed the risk of Y2K, they went back to the same lack of knowledge and now, when faced with an aging portfolio and aging workforce, they don't know any more now than they knew then. ...Baby Boomer retirements [are] bringing back a recognition that [IT shops] don't understand these application portfolios to move forward.


Comments

Apr 1, 2010 11:28 AM Eric Hall says:

Prepare for the 2038 problem now... before it is too late!

Dec 10, 2010 10:14 AM Matt says:

The 2038 problem has already been addressed in many Unix systems by changing the 32-bit timestamp to a 64-bit one.
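The commenters' point is easy to verify: a signed 32-bit time_t counts seconds since 1970 and runs out early on January 19, 2038, while a 64-bit counter does not. A quick illustration:

```python
from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can hold:
last_second = 2**31 - 1

print(datetime.fromtimestamp(last_second, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- one tick later, a 32-bit counter wraps

# A signed 64-bit time_t pushes the wraparound out by roughly
# 292 billion years, which is why widening the field resolves it.
```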

