I am testing the migration of one year's worth of customer demographic and activity data from one database to another.

Random sampling of pre- and post-migration data seems to be the accepted wisdom when undertaking this type of testing. I intend to apply risk-based techniques and sample randomly per logical partition, with the highest percentage of testing in the critical partitions.
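
For illustration, the per-partition sampling might look something like this (the table and column names here are invented, and the random-ordering syntax varies by vendor - random() on PostgreSQL, RAND() on MySQL, NEWID() on SQL Server):

    -- Draw a random sample from one logical partition, pre- and
    -- post-migration, then compare the two result sets field by field.
    SELECT *
    FROM customers
    WHERE region = 'EMEA'   -- the logical partition under test
    ORDER BY random()       -- PostgreSQL syntax; see note above for other vendors
    LIMIT 500;              -- use larger samples for the critical partitions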

I have scripting and SQL experience, but no automated testing scripts will be used in this Test Strategy.

Before I get started, I thought I'd check if you have any suggestions or similar experience that may be of use?

Thank you,
Helen Doyle


Replies to This Discussion

I am at work so can't post more, but this paper by Martin Fowler may be useful: http://martinfowler.com/articles/evodb.html
How did the data migration go? Was it a "straight copy" from one DB to another, without any schema/record changes? If so, then your approach sounds okay - a little sanity check is probably sufficient...

However if, as in many cases, there was some kind of data transformation/reformatting done to shoe-horn the legacy data into the new system, then you're going to have to do some investigation:

Check with the folks who did the data migration - were there any problems? If so, then zero in on what they were. How rigorous/defensive were the script writers? You might want to review the scripts to see if they might have caused some problems that are not readily apparent. Did the scripts log any errors found while pulling data out of the old system, or inserting data into the new? How rigorously were the migration scripts tested before they were applied to the production data?

Are there any changes to constraints placed on records/fields? Things like numeric ranges, character string lengths, etc.
Anything that might be different is a candidate for scrutiny.
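
For example, a query along these lines can hunt for legacy values that won't fit the new constraints before the data even moves (all names invented for illustration; LENGTH() as on most engines):

    -- Find rows that would violate the new schema's tighter rules.
    SELECT customer_id, surname, age
    FROM old_customers
    WHERE LENGTH(surname) > 50          -- new column is, say, VARCHAR(50)
       OR age NOT BETWEEN 0 AND 130;    -- new CHECK constraint range
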
We tried to keep the schema changes to a minimum, but there were changes, and definitely some "shoe horning".

The migration scripter reported good results, but I will request direct access to the scripts and logs to get a better feel for things.

Thanks for your input.
Failures can happen at a few places. Test that the data is not corrupted, that the schemas are correct, and that the functionality is preserved.
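
A cheap first pass on "the data is not corrupted" is to run the same aggregate reconciliation on both sides and diff the output. Something like this (all names invented for illustration):

    -- Run against the old DB and the new DB, then compare the result sets.
    SELECT region,
           COUNT(*)        AS row_count,
           SUM(balance)    AS balance_total,
           MIN(created_at) AS earliest,
           MAX(created_at) AS latest
    FROM customers
    GROUP BY region
    ORDER BY region;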

Try some simple use cases on the new db

1) Set-up tests: connect to the new DB server, create a new DB, create tables and rules, populate data, then compile the stored procs and triggers.

2) Follow-up tests: connect to the DB, execute the SQL stored procs and triggers, then disconnect from the DB.
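
In SQL, the set-up pass could be a minimal smoke script along these lines (placeholder names; any failure here surfaces the privilege, contention and space errors listed below early):

    CREATE DATABASE migration_check;
    -- (connect to / USE the new database here, as your engine requires)
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        surname     VARCHAR(50) NOT NULL
    );
    INSERT INTO customers (customer_id, surname) VALUES (1, 'Doyle');
    SELECT COUNT(*) FROM customers;   -- expect 1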

Look for errors like

Invalid passwords, usernames, users having the wrong privileges to create tables, etc.

Resource contention stopping some stored procs or tables from being created.

Not enough room to create tables or DBs. This goes wrong often, so check that all tables you expected to be created during set-up actually exist.

Or there is enough room at first, but it fills up too quickly once you start adding data.

Check that the database is optimised properly and working at least reasonably efficiently. Some DBs don't even bother indexing properly, and this creates loads of problems later on.
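
One quick way to check is to EXPLAIN a lookup your application actually performs on the new DB (the query below is a made-up example; EXPLAIN output varies by vendor):

    EXPLAIN
    SELECT * FROM customers WHERE email = 'helen@example.com';
    -- A full/sequential table scan in the plan means the expected index is missing.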

Check that case sensitivity is preserved.

Create test cases to cover all of the possible transactions the DB performs. Map these to the expected actions on the affected tables and document the preconditions.

How are commas and single quotes handled?

How are invisible characters handled?

What happens when a timeout occurs? Does it take as long as expected?

Try passing NULL to a field that doesn't accept it.

What happens when a string is too long or a value is too large for the field?
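
The last few checks can be probed with throwaway inserts like these (placeholder table from the smoke script above; the first should succeed with the quote intact, the other two should fail cleanly with the right error - REPEAT() as in MySQL/PostgreSQL):

    INSERT INTO customers (customer_id, surname) VALUES (10, 'O''Brien');        -- single quote must round-trip intact
    INSERT INTO customers (customer_id, surname) VALUES (11, NULL);              -- expect a NOT NULL violation
    INSERT INTO customers (customer_id, surname) VALUES (12, REPEAT('x', 200));  -- expect a length error on VARCHAR(50)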

When compiling the SQL procs, check you are using the same compiler as the source DB, because that can cause sneaky errors.

Have a plan to deal with the data added to the original DB while you are migrating and testing.

That's all I can think of for now. The list is partly based on Testing Applications on the Web (ISBN 0-471-20100-6), which is a guide I use.

Ric
Thanks Ric - that is exactly the kind of detail I need right now.
I'll check out the book (ISBN 0-471-20100-6) too.
Don't forget localisation, UTF-8 and UTF-16.
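
A simple check for that: list the rows containing multi-byte characters on the new DB and eyeball the same rows on the old one for mojibake. octet_length()/char_length() exist on most engines, though names vary slightly; the table is invented:

    -- Rows where byte length exceeds character length contain multi-byte
    -- (non-ASCII) content - the first place encoding damage shows up.
    SELECT customer_id, surname
    FROM customers
    WHERE octet_length(surname) <> char_length(surname)
    LIMIT 100;
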
Hi Helen,

Hope your data migration project is going fine. I am a research student with a particular interest in data migration in ERP systems and information systems. Data migration is noted to be a challenging task with ERP, and in fact it seems to be one in any migration attempt.

I believe that automation of this task is the key; however, I need more input from practical users like you.

I look forward to your support as time permits.

Regards,

Mark
Mark, yes, automation is a necessary key to migrating data, but you generally can't simply write some scripts and push the button. In every data migration effort I've heard about, there were plenty of instances where the data didn't fit nicely into the new environment. There are many reasons for this - the new schema doesn't permit a nice transformation, the original rules weren't followed, legacy S/W bugs caused unexpected relationships - you name it.

Generally speaking, data migration is a dirty nasty job...
One word: "indexing." That little bugaboo has been the downfall of many data migration testing efforts I've attempted. If the tables were designed with independent GUIDs then you should be somewhat safe. If, however, they were indexed using something like row numbers or composite IDs, then watch out.
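
To make that concrete: if row-number-style IDs get reassigned during the load, a join on a stable natural key flags the divergence. A sketch, assuming both schemas are reachable from one connection (linked server, foreign tables, or similar) and with invented names:

    -- Records whose surrogate key changed between the old and new systems.
    SELECT o.customer_id AS old_id, n.customer_id AS new_id, o.email
    FROM old_customers o
    JOIN new_customers n ON n.email = o.email   -- email as the stable natural key
    WHERE n.customer_id <> o.customer_id;
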
Thanks, guys, for your input.

However, what remains as a question is: how do ERP vendors do it? It is a challenge for them too, given the circumstances. Do they take a mirror of the existing data and validate it later for inconsistencies and fix the bugs, or use some other means?
One more thing...

All these DM projects, in any system, would be a pain. However, consider a hypothetical:
"Think of two cupboards, one in the latest style and the other your old one. Your new cupboard has different compartments, safety measures and what not. So how do we move the stuff that was in the old cupboard to the new, with the same identification of shelves and items that you are used to? Obviously you cannot have the same layout, because your new cupboard is of a new standard. What do you do?
I would move my old contents into a temporary place, keeping track of identifications and maps. Then I would first organise them into a format that I could recognise in the new cupboard. When moving to the new shelf I would verify that I am moving the right thing to the right place. Once the task is complete, you could re-check that the new cupboard's items are accessible and whether any items were left out in the temporary location."


So now, applying this to the DM work:
1. We need an analyser that accurately identifies the schemas and their differences.
2. Create a backup of the old.
3. Identify indexes, PKs, FKs and relationships, and use an algorithm to temporarily link the items in the old to match the schema in the new.
4. Run automated validation on the data while it is transferred, comparing against both new and old.
5. Run a verification across the temporary, old and new schemas.
6. Then probably a full regression with the new DB and the software that uses it.

I'd really value your great input.
A very naive view - if only it were that easy...
