
Test Cheat Sheet (DRAFT) by Helping Others

This is a draft cheat sheet. It is a work in progress and is not finished yet.

Test 1

• Stage source data in QVD files and then load from the QVDs. This avoids strain on the source systems (and possibly on network bandwidth) and is a lot quicker, safer and more productive (see the staging sketch after this list).
• Break out each data source's load process into its own script section, and use an Exit Script statement that can easily be moved to test each load process separately.
• If possible, develop with a meaningful subset of data, using Where clauses and/or Exists clauses in the load so that you maintain relevant key matches (see the subset-load sketch below).
• Avoid creating overly large applications covering multiple use cases; it is far more efficient to create several smaller applications, each covering a discrete user journey.
• Remove synthetic keys and, where possible, circular references.
• Remove (or, better practice, comment out) all unused fields from the load.
• Remove or simplify timestamps (for example, you rarely need 1/100th of a second, so you could use the Ceil function to round up to the nearest minute) and remove highly unique system fields (see the timestamp sketch below).
• Use Limited Load in debug mode to test your script logic before running a full reload, or use the First prefix to limit the load.
• Use AutoNumber to replace text-string key fields with more efficient integers (see the AutoNumber sketch below).
• Remove, join or concatenate unnecessary snowflaked tables.
• Avoid nested If statements; alternatives are mapping tables in the load script, Pick(Match()) functions, and Set Analysis with flag fields in the user interface (see the mapping sketch below).
• Consider incremental loads for large data sets that need to be regularly updated; this reduces the load on the source system and speeds up the overall load process (see the incremental-load sketch below).
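A minimal staging sketch, assuming a hypothetical Orders table in the source database and a hypothetical vPathQVD staging folder:

// Extract layer: read from the source system once...
SET vPathQVD = '..\QVD\';   // hypothetical staging folder

Orders:
SQL SELECT OrderID, CustomerID, OrderDate, Amount
FROM dbo.Orders;

// ...persist to disk, then drop the in-memory copy.
STORE Orders INTO [$(vPathQVD)Orders.qvd] (qvd);
DROP TABLE Orders;

// Transform layer: reload from the QVD instead of the source;
// an optimized QVD load is typically an order of magnitude faster.
Orders:
LOAD OrderID, CustomerID, OrderDate, Amount
FROM [$(vPathQVD)Orders.qvd] (qvd);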
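A subset-load sketch, assuming hypothetical Orders and Customers QVDs: limit the fact table with a Where clause, then use Where Exists() so the dimension only keeps matching keys:

// Development subset: only a recent slice of the fact table.
Orders:
LOAD OrderID, CustomerID, OrderDate, Amount
FROM [$(vPathQVD)Orders.qvd] (qvd)
WHERE OrderDate >= MakeDate(2024);   // hypothetical cut-off

// Keep the dimension in step with the keys loaded above,
// so relevant key matches are preserved.
Customers:
LOAD CustomerID, CustomerName
FROM [$(vPathQVD)Customers.qvd] (qvd)
WHERE Exists(CustomerID);

Note that a general Where clause forces an unoptimized QVD read, whereas Where Exists() on a single field keeps the QVD load optimized.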
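A timestamp sketch: Qlik stores timestamps as fractional days, so a Ceil step of 1/1440 rounds up to the nearest minute (field names are hypothetical):

Events:
LOAD
    EventID,
    // 1/1440 of a day = one minute; rounding collapses near-unique
    // timestamps into far fewer distinct values, shrinking the app.
    Timestamp(Ceil(EventTime, 1/1440)) AS EventMinute
FROM [$(vPathQVD)Events.qvd] (qvd);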
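An AutoNumber sketch, assuming a hypothetical composite order-line key:

OrderLines:
LOAD
    // Replace a long composite text key with a compact integer.
    // The second argument names the counter, so the same input
    // always yields the same integer within this reload.
    AutoNumber(OrderID & '|' & LineNo, 'OrderLine') AS %OrderLineKey,
    Quantity,
    Amount
FROM [$(vPathQVD)OrderLines.qvd] (qvd);

AutoNumber values are only stable within a single reload, so avoid storing them in QVDs that other applications will read.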
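A mapping sketch replacing a nested If: a Mapping Load plus ApplyMap in the script (values are hypothetical); in the user interface, the equivalent is Pick(Match(...)) or Set Analysis on a flag field:

// Instead of If(Country='UK','EMEA', If(Country='US','Americas', ...))
RegionMap:
MAPPING LOAD * INLINE [
Country, Region
UK, EMEA
DE, EMEA
US, Americas
JP, APAC
];

Customers:
LOAD
    CustomerID,
    ApplyMap('RegionMap', Country, 'Other') AS Region   // 'Other' = default
FROM [$(vPathQVD)Customers.qvd] (qvd);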
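An incremental-load sketch for inserts and updates, assuming a hypothetical ModifiedDate field in the source and an existing Orders.qvd:

// Fetch only rows changed since the QVD was last written.
LET vLastLoad = Timestamp(QvdCreateTime('$(vPathQVD)Orders.qvd'));

Orders:
SQL SELECT OrderID, CustomerID, OrderDate, Amount, ModifiedDate
FROM dbo.Orders
WHERE ModifiedDate >= '$(vLastLoad)';

// Append historic rows that are not in the fresh delta
// (updated rows keep their new version from the delta).
Concatenate (Orders)
LOAD * FROM [$(vPathQVD)Orders.qvd] (qvd)
WHERE NOT Exists(OrderID);

// Write the combined table back for the next run.
STORE Orders INTO [$(vPathQVD)Orders.qvd] (qvd);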
\\eudvmmsqs501\dev\1000.Data_QVD_Layer\1.QVD\1.Extract\QV_QVD_Before\
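The UNC path above appears to be the extract-layer QVD staging folder; a sketch of how it might be referenced in the script (an assumption, the sheet does not show its usage):

SET vQVDBefore = '\\eudvmmsqs501\dev\1000.Data_QVD_Layer\1.QVD\1.Extract\QV_QVD_Before\';

// Extract-layer stores and loads can then share one root folder:
STORE Orders INTO [$(vQVDBefore)Orders.qvd] (qvd);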
