Thursday 29 May 2014

Load testing webinar - RECORD | RUN | LEARN | IMPROVE

This is a bit of a promo: I’m hosting a load testing webinar next week.  If you’d like to attend, please use the following URL to register your interest:  http://myriad-it.com/load-testing-record-run-learn-improve/

WEBINAR DETAILS:
DATE: Wednesday 4th June
TIME: 10 - 11am
PRESENTER: Shannon Moir (Director, Oracle Consulting)

AGENDA:
Understanding Load Testing as a Service
> Architecture & software requirements

Load Testing demonstration
> How to record & run a JD Edwards testing script

Making the system faster
> Interpret Load Testing results
> Identify areas for improvement such as bottlenecks
> Benchmarking
> Monitoring & reporting

Tuesday 27 May 2014

unix log file analysis

If you want to find places in your log file where a line is more than 14000 (for example) microseconds later than the previous line, then this awk script is for you.  It does some gnarly calcs over the JD Edwards time stamps and tells you which two lines have a gap greater than X.  (The script compares the six-digit fractional part of the timestamp, so the units are microseconds, and a gap that spans a whole-second boundary will be missed.)  I know that you can do this with “performance workbench”, but that’s not as much fun.

awk -F. '{
  t = substr($2,1,6)                               # six-digit microsecond part of the timestamp
  if (t - prev > 14000 && prev > 0 && t*1 == t)    # numeric lines only, gap over the threshold
    printf("GAP(%d)\nPREV%s\nCURR%s\n", t - prev, prevline, $0)
  prev = t; prevline = $0
}' R42565* > Gaps.txt

GAP(15992)
PREVMay 27 09:21:58.293008 - 2388/4500 MAIN_THREAD                              Try to send the message to security server AUULTDWNWAD013
CURRMay 27 09:21:58.309000 - 2388/4500 MAIN_THREAD                              Try to free sec list
GAP(31911)
PREVMay 27 09:21:58.324089 - 2388/4500 WRK:Starting jdeCallObject               METADATA - Using Environment [JPY910] for user [00160531] in sendCnvMsgToMetadataServer()
CURRMay 27 09:21:58.356000 - 2388/4500 WRK:Starting jdeCallObject               Spec Data returned from Metadata server. size = 508.
GAP(15997)
PREVMay 27 09:21:58.371003 - 2388/4500 WRK:Starting jdeCallObject               METADATA - Using Environment [JPY910] for user [00160531] in sendCnvMsgToMetadataServer()
CURRMay 27 09:21:58.387000 - 2388/4500 WRK:Starting jdeCallObject               Spec Data returned from Metadata server. size = 732.

So the output is something like the above.  My threshold was 14000 microseconds (14ms).  Note that this is a common issue with my clients: a consistent wait of over 14000 microseconds on traffic.  It’s pure network too – on Windows 2008R2 – as some of the traffic is DB and some is local (to its own metadata kernel!)

If anyone has a fix for this ~15ms wait, I’d love to know!

Wednesday 21 May 2014

Media Object File attachment is too slow

Have you ever gone to MO attachments and waited and waited for the FILE button to appear, like the below:

[screenshot]

If you look at the frame source, you will work it out…  14MB of HTML in this client’s case!
<option value=14>Human Resources</option>
<option value=16>Manufacturing</option>
<option value=18>Misc Images</option>
<option value=20>Misc. Animations</option>
<option value=22>Miscellaneous</option>
</select>
<input type=hidden name=queue value='0'>
<input type=hidden name=queuename value='BusinessFunctionDoc'>
</td></tr>
<tr><td>&nbsp;</td><td><LABEL FOR="IMAGE">File</LABEL></td>
<td>
<select name=IMAGE ID=IMAGE style="width:35em" onChange='if(document.MOImageList.preview.checked)show(options[selectedIndex].value);' >
<option value='\\SOP-JDEP\E910\PrintQueue\F0009_ALTERTABLE_D1130822_T123735.log' selected>F0009_ALTERTABLE_D1130822_T123735.log</option>
<option value='\\SOP-JDEP\E910\PrintQueue\F0009_ALTERTABLE_D1131009_T124312.log'>F0009_ALTERTABLE_D1131009_T124312.log</option>
<option value='\\SOP-JDEP\E910\PrintQueue\F0009_ALTERTABLE_D1131015_T164939.log'>F0009_ALTERTABLE_D1131015_T164939.log</option>
<option value='\\SOP-JDEP\E910\PrintQueue\F0009_ALTERTABLE_D1140128_T174906.log'>F0009_ALTERTABLE_D1140128_T174906.log</option>

The code above shows that it’s actually listing EVERY file that’s in the folder pointed to by the P98MOQUE entry for BusinessFunctionDoc.

In my case, when I go to P98MOQUE and see where this points, it’s the deployment server’s PrintQueue folder (huh?) – and that folder has lots of files.
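If you’d rather check where each queue points straight from SQL, the media object queues live in F98MOQUE.  A minimal peek (the SY910 schema here is an assumption – use whatever system schema your release has):

-- list the media object queue definitions and find the BusinessFunctionDoc row
SELECT * FROM SY910.F98MOQUE;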

[screenshot]

So, add an attachment queue called AardvarkAttachments (it sorts first alphabetically, so it becomes the default selection) as a file-type attachment location and point it at an empty dir – then your page is going to be faster than any other!

Friday 16 May 2014

orphan full package - delete the record

Do you go into your "Package Assembly" screen and see some packages listed that are not actively deployed?  This occurs when you rename machines or re-purpose them (dev to prod etc).

All you need to do is carefully look through the data in F96511 and delete the records that are not needed.  You'll just need to delete the invalid records where the old machine or package name is in the SKSERSHP column.
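For example, something like the below (OLDSERVER is a placeholder for the renamed machine, and the SY910 schema is an assumption – check where F96511 lives in your install, and back the table up first):

-- find the orphan rows before you touch anything
SELECT * FROM SY910.F96511 WHERE SKSERSHP LIKE '%OLDSERVER%';
-- then, once you are sure, remove them
DELETE FROM SY910.F96511 WHERE SKSERSHP LIKE '%OLDSERVER%';
COMMIT;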

Simple reference article today - OMW tables and descriptions


OMW Log Header                                               F98210  
OMW Log Detail                                               F98211  
OMW Project Master                                           F98220  
OMW Project Descriptions                                     F98220D  
OMW Project Users                                           F98221  
OMW Project Objects                                         F98222  
OMW Allowed Actions                                         F98223  
OMW Status Activity Rules                                   F98224  
OMW Transfer Activity Rules                                 F98225  
Source Management Access Configuration                       F98226  
OMW Source Control Path Configuration                       F98227  
OMW System Settings                                         F98230  
OMW Transfer Activity Rule Director - Detail                 F98231D  
OMW Transfer Activity Rule Director - Header                 F98231H  
LCM User Generated Contents Manager                         F982400  
LCM UGC Transfer Status Mapping                             F982401  
LCM/UGC Transfer and Package Header                         F982402  
LCM/UGC Transfer and Package Detail                         F982403  


Thursday 15 May 2014

JD Edwards upgrade–make it faster

This is going to be a bit of a tease post, apologies.  Want to reduce your TC times from 50 hours to 2.5?

I’ve been recently engaged by a client to see what can be done with their upgrade window.  Anyone with more than 1TB of data will start to push the upgrade window boundaries for their go-live weekend.  What I’m saying is: out of the box, upgrading E1 with standard methodologies will take 12 to 24+ hours with 1TB+ of data in normal circumstances.  Unless you have Exadata – then it might take 15 minutes.

Remember that the data upgrade is generally only a small piece of the upgrade process.  You still need to reconcile data in XE, then upgrade, then reconcile and test again – this can take a lot of time.  You should always make the technical piece more efficient to take the pressure off the other teams.

As you are aware, there are 3 types of TCs in the upgrade process (I’m talking specifically about table conversions – data-only updates).

  • *MERGE
  • *ALTER
  • UBE

Each type of TC does what the name implies.  *MERGE generally merges the contents of two tables into one.  For example, F0911 and F0911T into F0911TCTEMP, then a rename back to F0911 – and a heap of indexes along the way.

*ALTER is all about adding columns with default values: ALTER TABLE X ADD COLUMN Y…  Sometimes, when you are dealing with a large table (50,000,000 rows), this ALTER can take longer than a generation and a load (funny, isn’t it!).
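As a rough sketch of the two shapes (the table and column names below are made up – this is illustration, not the TC itself):

-- the in-place change: on older releases this rewrites every one of the 50M rows to apply the default
ALTER TABLE PRODDTA.F55TEST ADD (TSNEWCOL NUMBER DEFAULT 0 NOT NULL);
-- versus rebuilding the table with the new column already populated
CREATE TABLE PRODDTA.F55TEST_NEW AS
SELECT t.*, 0 AS TSNEWCOL FROM PRODDTA.F55TEST t;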

And UBEs, my favourite.  UBEs are generally the worst performers of all.  They are UBEs because the upgrade generally cannot be done in any form of set-based logic, so we go back to simple RAT (Row At a Time) processing.  RAT for 50 million rows!!!  No thanks.

So how can we make this process faster – or what sort of improvements can be gained with some tweaking?

I’ve been able to take the F4072 and F4074 upgrade times from 50 hours to 1.5.  That’s not all: F42119 went from about 20 hours to 2.5.  This is using the same hardware and the same software (well, JD Edwards and a database), but re-engineering the scripts to use the power of the database, not just the archaic, slow RAT logic that the JD Edwards middleware supports.

I’ll start posting a little more detail about these tweaks.  So it might be that the upgrade that you thought impossible is very possible.

You must also be open to “pre-upgrading” your data.  Of course, if you are a large, rich multinational, you might consider GoldenGate to be your silver bullet – which it could easily be.  But you could also kill this bird with a simple implementation of Oracle Streams, or some simple triggers, to pre-upgrade all of your tables and then apply only the logic for changes into the 9.1 database.  Of course, for GoldenGate or Streams or triggers, you still need to reverse engineer all of the TCs and apply that logic.

Friday 9 May 2014

Convert World tables to OneWorld

Introduction:

This post shows a rapid method of converting World tables into OneWorld tables.  This is not for the faint of heart.  If you only like doing things that are in official Oracle manuals, please close this browser window and forget you visited this page: start printing off the hundreds of table definitions and 750 custom DD items, then sign into E1, create a project and begin typing.  See you in a month!  Oh, and when you make a typo on one DD item, or order the columns wrong, good luck finding it…

If you are more adventurous (in a very nerdy kinda way), keep reading (even if it is between your fingers as your hands cover your eyes).

Note that this procedure also copies the data to an Oracle database; that part is not mandatory for the technique to work.

This could be one of the largest cowboys you’ve ever read about.

Tables in OneWorld need the following to be valid:

· F9860 record
· F9861 record
· DD items / columns
· F98710 record – table header
· F98711 record(s) – table detail (row for each column)
· F98712 record – key header
· F98713 record – key details

So we are going to build all of the above with SQL from the AS/400 table definitions and catalogs.

 

Step 1: Create tables:

Copy data from AS/400 to oracle. http://shannonscncjdeblog.blogspot.com.au/2014/05/using-heterogeneous-database-services.html 

Use procedure outlined in “Creating ODBC connectivity from Oracle using heterogeneous database services”.

This is done with a simple set of “CREATE TABLE AS select * FROM” statements using the database link that you establish from the AS/400.

I was able to move about 30GB of data in 4 hours; this was the contents of 110 custom tables with about 165,000,000 rows.
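For flavour, a minimal sketch of one of those statements (F55TEST is a made-up table name; the A73A01DTA library and the a01proddta link name come from later in this post – substitute your own):

CREATE TABLE CRPDTA.F55TEST AS
SELECT * FROM A73A01DTA.F55TEST@a01proddta;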

 

F98710:

I have created my custom tables in CRPDTA.

--110 table header entries
insert into dv910.f98710
select 0, trim(table_name), 0, 0, 0, 'E910', '', '', '', ''
from all_tables t1
where t1.owner = 'CRPDTA'
and (t1.table_name like 'F55%'
  or t1.table_name like 'F56%'
  or t1.table_name like 'F57%'
  or t1.table_name like 'F58%'
  or t1.table_name like 'F59%');

commit;

 

F98711:

insert into dv910.f98711
select 0, table_name, 0, trim(substr(column_name,3,10)), column_id, trim(column_name), 0, 'E910', ' ', ' ', ' ', ' '
from all_tab_columns
where owner = 'CRPDTA'
and (table_name like 'F55%'
  or table_name like 'F56%'
  or table_name like 'F57%'
  or table_name like 'F58%'
  or table_name like 'F59%');

commit;

F98712:

Note that I was creating a read only environment, so I did not care for the key or the format of the key. E1 just needs a PK. I chose the first column of the table.

insert into dv910.f98712
select 0, trim(table_name), 1, 'Primary Index', '1', '1', 0, 'E910', '', '', '', ''
from all_tables t1
where t1.owner = 'CRPDTA'
and (t1.table_name like 'F55%'
  or t1.table_name like 'F56%'
  or t1.table_name like 'F57%'
  or t1.table_name like 'F58%'
  or t1.table_name like 'F59%');

commit;

F98713:

insert into dv910.f98713
select 0, trim(table_name), 1, 'Primary Key', 0, trim(substr(column_name,3,10)), 1, 'A', 0, 'E910', ' ', ' ', ' ', ' '
from all_tab_columns
where owner = 'CRPDTA'
and column_id = 1
and (table_name like 'F55%'
  or table_name like 'F56%'
  or table_name like 'F57%'
  or table_name like 'F58%'
  or table_name like 'F59%');

commit;

 

F9200:

insert into dd910.f9200
select distinct trim(substr(column_name,3,10)), '55', '55', 'D', 'NDENT', 'SQL', 114100, 'MYPC', 6000
from all_tab_columns
where owner = 'CRPDTA'
and (table_name like 'F55%'
  or table_name like 'F56%'
  or table_name like 'F57%'
  or table_name like 'F58%'
  or table_name like 'F59%')
and not exists
  (select 1 from dd910.f9200
   where trim(frdtai) = trim(substr(column_name,3,10)));

commit;

 

F9203:

insert into dd910.f9203
select distinct trim(substr(column_name,3,10)), ' ', ' ', 'Default Description', 'DEFAULT DESCRIPTION', ' '
from all_tab_columns
where owner = 'CRPDTA'
and (table_name like 'F55%'
  or table_name like 'F56%'
  or table_name like 'F57%'
  or table_name like 'F58%'
  or table_name like 'F59%')
and not exists
  (select 1 from dd910.f9203
   where trim(frdtai) = trim(substr(column_name,3,10)));

commit;

F9210:

This was a little harder.  I could have used decode statements in my SQL, but I was getting tired, so I used a spreadsheet (with formulas) to build the insert statements.

Note that the DD could have been done much better if the World -> E1 DD merge had been run; this would give proper descriptions.  Once again, I can update the descriptions later if I care, via the database link, knowing where to find them.  My issue was that the CCSID conversion was not run over the World DD, therefore all of the columns in the DD files were junk when I tried to select them.

--741 custom DD items (not in 910 DD)
select distinct substr(column_name,3,10), data_type, data_length
from all_tab_columns
where owner = 'CRPDTA'
and (table_name like 'F55%'
  or table_name like 'F56%'
  or table_name like 'F57%'
  or table_name like 'F58%'
  or table_name like 'F59%')
and not exists
  (select 1 from dd910.f9210
   where trim(frdtai) = trim(substr(column_name,3,10)));

Formula for spreadsheet:

=+CONCATENATE("INSERT INTO DD910.F9210 values('",TRIM(MID(C837,3,6)),"',' ','S',",G837,",0,' ',0,' ',' ','",H837,"',' ',' ', ' ', ' ', ' ', ' ', ' ',0,' ',' ','SMOIRPC',114250,' ','SMOIR',9000, 'Description",ROW(),"', ",IF(D837="NUMBER",9,IF(D837="CHAR",2,11)),",4,'N','N','N',' ',' ',' ',' ',' ',0",",' ',0,' ',0,' ',0,' ','N',0,'N' ",");")

Which generates:

INSERT INTO DD910.F9210 values('$A01',' ','S',3,0,' ',0,' ',' ','',' ',' ', ' ', ' ', ' ', ' ', ' ',0,' ',' ','SMOIRPC',114250,' ','SMOIR',9000, 'Description837', 2,4,'N','N','N',' ',' ',' ',' ',' ',0,' ',0,' ',0,' ',0,' ','N',0,'N' );

INSERT INTO DD910.F9210 values('$A02',' ','S',3,0,' ',0,' ',' ','',' ',' ', ' ', ' ', ' ', ' ', ' ',0,' ',' ','SMOIRPC',114250,' ','SMOIR',9000, 'Description838', 2,4,'N','N','N',' ',' ',' ',' ',' ',0,' ',0,' ',0,' ',0,' ','N',0,'N' );

INSERT INTO DD910.F9210 values('$A03',' ','S',3,0,' ',0,' ',' ','',' ',' ', ' ', ' ', ' ', ' ', ' ',0,' ',' ','SMOIRPC',114250,' ','SMOIR',9000, 'Description839', 2,4,'N','N','N',' ',' ',' ',' ',' ',0,' ',0,' ',0,' ',0,' ','N',0,'N' );

 

F9861:

insert into ol910.f9861
select distinct trim(t1.table_name), 'NZAKLEVFN740', ' ', 'SMOIR', 114122, 'E910', '7894561', '1', ' ', ' ', ' ', ' ', 'DV910', ' ', 'COWBOY', 'NDENT', 114200, 8000
from all_tables t1
where t1.owner = 'CRPDTA'
and (t1.table_name like 'F55%'
  or t1.table_name like 'F56%'
  or t1.table_name like 'F57%'
  or t1.table_name like 'F58%'
  or t1.table_name like 'F59%');

commit;

 

F9860:

insert into ol910.f9860
select distinct trim(t1.table_name), trim(t2.table_text) || ' ', '55', '55', 'TBLE', '2', max(substr(t3.column_name,1,2)), 'C', ' ', ' ', ' ', 'N', ' ', 'N', 0, ' ', ' ', ' ', ' ', ' ', ' ', ' ', 0, ' ', ' ', 'JDBTRIG', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'COWBOY', 'SMOIR', 'SMOIR', 115000, 5000
from all_tables t1, all_tab_columns t3, qsys2.systables@a01proddta t2
where t1.owner = 'CRPDTA'
and (t1.table_name like 'F55%'
  or t1.table_name like 'F56%'
  or t1.table_name like 'F57%'
  or t1.table_name like 'F58%'
  or t1.table_name like 'F59%')
and t2.table_schema = 'A73A01DTA'
and trim(t1.table_name) = trim(t2.table_name)
and trim(t2.table_name) = trim(t3.table_name)
and t3.owner = 'CRPDTA'
and not exists
  (select 1 from ol910.f9860 where trim(siobnm) = trim(t2.table_name))
group by trim(t1.table_name), trim(t2.table_text) || ' ', '55', '55', 'TBLE', '2', 'C', ' ', ' ', ' ', 'N', ' ', 'N', 0, ' ', ' ', ' ', ' ', ' ', ' ', ' ', 0, ' ', ' ', 'JDBTRIG', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'COWBOY', 'SMOIR', 'SMOIR', 115000, 5000;

commit;

 

Extra for experts:

Damn that 400 for starting table columns with a $ sign – you cannot have that in Oracle without quoted identifiers.  Of course, the FIRST table of the 110 that I test in E1 has a $ as the first char of its columns.  This is defined in F98711, in F9860 and in the table itself, so I need to change all of these quickly.

F9860

update ol910.F9860 set sipfx = replace(sipfx,'$','Z') where sifuno = 'TBLE' and SIPFX like '$%';

F98711

update dv910.f98711 set tdsqlc = replace(tdsqlc,'$','Z') where tdsqlc like '$%' ;

DDL for tables

SELECT 'ALTER TABLE CRPDTA.' || table_name || ' RENAME COLUMN "' || column_name || '" TO ' || replace(column_name,'$','Z') || ';'
from all_tab_columns t1
where t1.owner = 'CRPDTA'
and (t1.table_name like 'F55%'
  or t1.table_name like 'F56%'
  or t1.table_name like 'F57%'
  or t1.table_name like 'F58%'
  or t1.table_name like 'F59%')
and column_name like '$%';

This creates a bunch of:

ALTER TABLE CRPDTA.F554108 RENAME COLUMN "$OUB06" TO ZOUB06;
ALTER TABLE CRPDTA.F554108 RENAME COLUMN "$OSRP2" TO ZOSRP2;
ALTER TABLE CRPDTA.F554108 RENAME COLUMN "$OSRP3" TO ZOSRP3;
ALTER TABLE CRPDTA.F554108 RENAME COLUMN "$OSRP4" TO ZOSRP4;
ALTER TABLE CRPDTA.F554108 RENAME COLUMN "$OSRP5" TO ZOSRP5;
ALTER TABLE CRPDTA.F554108 RENAME COLUMN "$OSRP6" TO ZOSRP6;
ALTER TABLE CRPDTA.F554108 RENAME COLUMN "$OSRP7" TO ZOSRP7;
ALTER TABLE CRPDTA.F554108 RENAME COLUMN "$OSRP8" TO ZOSRP8;

 

Conclusion:

In summary, I can now use UTB or the data browser on any one of the 110 custom World tables.  Sure, some of the custom DD items have rubbish descriptions, but that is cosmetic and could be fixed.  The data is there and queryable in E1.  These table definitions can be promoted from DV and included in package builds as valid objects.

[screenshot]

[screenshot]

Thursday 8 May 2014

oracle template for de-duplication (dedupe)

This is an often-fought-with concept.  You need to create a PK over a table based upon a list of columns, but it’s hard to effectively identify which row to delete, because the duplicates have the same key values.  Now, please remember: I’m considering these rows to be duplicates, and I don’t really care which one I keep and which one I delete.

My situation is quite specific, so I’ll elaborate.  One of my table conversions is falling back to RAT because of duplicates in the F4072TEM, based upon the new PK that is in the new F4072.  Darn it…  I cannot afford to have this go back to RAT.  And what happens when something is RAT?  The dupes are just chucked away anyway…  So I’m going to get rid of them with some set-based logic.

Desc the PK, and you get a list of columns:

ADITM, ADAST, ADAN8, ADIGID, ADCGID, ADOGID, ADCRCD, ADUOM, ADMNQ, ADEXDJ, ADUPMJ, ADTDAY

So, use this list of columns in the template SQL below:

delete from tablename
where (rowid,COLUMN LIST) in (
select max(rowid), COLUMN LIST
from tablename
group by COLUMN LIST
having count(1) > 1);

To create

delete from proddta.f4072tem
where (rowid,ADITM, ADAST, ADAN8, ADIGID, ADCGID, ADOGID, ADCRCD, ADUOM, ADMNQ, ADEXDJ, ADUPMJ, ADTDAY) in (
select max(rowid), ADITM, ADAST, ADAN8, ADIGID, ADCGID, ADOGID, ADCRCD, ADUOM, ADMNQ, ADEXDJ, ADUPMJ, ADTDAY
from proddta.f4072tem
group by ADITM, ADAST, ADAN8, ADIGID, ADCGID, ADOGID, ADCRCD, ADUOM, ADMNQ, ADEXDJ, ADUPMJ, ADTDAY
having count(1) > 1);

Nice – it deletes the newest record; you could use min to delete the oldest record instead.

Wait… update…  The above is good only if there is ever one other duplicate.  When there are several duplicates matching the key, the above has to be run multiple times (each pass removes one duplicate per key), which is not good… unless you put some recursion around your scripts, so…

Although quite a bit slower, the following works.  Thanks to http://www.dba-oracle.com/t_delete_duplicate_table_rows.htm for some help on this one.

delete from proddta.f4072tem t1
where t1.rowid >
ANY (
select
t2.rowid
from proddta.f4072tem t2
WHERE t1.ADITM = t2.ADITM
and t1.ADAST = t2.ADAST
and t1.ADAN8 = t2.ADAN8
and t1.ADIGID = t2.ADIGID
and t1.ADCGID = t2.ADCGID
and t1.ADOGID = t2.ADOGID
and t1.ADCRCD = t2.ADCRCD
and t1.ADUOM = t2.ADUOM
and t1.ADMNQ = t2.ADMNQ
and t1.ADEXDJ = t2.ADEXDJ
and t1.ADUPMJ = t2.ADUPMJ
and t1.ADTDAY = t2.ADTDAY);

And finally… another one.  The above is way too slow – I’m not sure that it’ll ever return – so here is my final instalment.  I really must pay homage to Burleson Consulting; their website is gold when it comes to quick Oracle tips.  http://www.dba-oracle.com/

delete from proddta.F4072tem where rowid in
  (
  select "rowid" from
     (select "rowid", rank_n from
         (select rank() over (partition by  ADITM, ADAST, ADAN8, ADIGID, ADCGID, ADOGID, ADCRCD, ADUOM, ADMNQ, ADEXDJ, ADUPMJ, ADTDAY order by rowid) rank_n, rowid as "rowid"
             from proddta.F4072tem
             where (ADITM, ADAST, ADAN8, ADIGID, ADCGID, ADOGID, ADCRCD, ADUOM, ADMNQ, ADEXDJ, ADUPMJ, ADTDAY) in
                (select ADITM, ADAST, ADAN8, ADIGID, ADCGID, ADOGID, ADCRCD, ADUOM, ADMNQ, ADEXDJ, ADUPMJ, ADTDAY from proddta.F4072tem
                  group by ADITM, ADAST, ADAN8, ADIGID, ADCGID, ADOGID, ADCRCD, ADUOM, ADMNQ, ADEXDJ, ADUPMJ, ADTDAY
                  having count(*) > 1
                )
             )
         )
     where rank_n > 1
  );
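Whichever variant you use, it’s worth re-running the duplicate check afterwards – this should come back with no rows:

select ADITM, ADAST, ADAN8, ADIGID, ADCGID, ADOGID, ADCRCD, ADUOM, ADMNQ, ADEXDJ, ADUPMJ, ADTDAY, count(*)
from proddta.F4072tem
group by ADITM, ADAST, ADAN8, ADIGID, ADCGID, ADOGID, ADCRCD, ADUOM, ADMNQ, ADEXDJ, ADUPMJ, ADTDAY
having count(*) > 1;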

excel convert columns to rows or rows to columns

This may be the best thing that I’ve discovered in the last 10 years.  How many times have I had a list of columns that I wanted to convert to rows, and vice versa?  Too many…  The common problem is “SELECT * FROM F0101”, which goes across the page when I want it down the page.

The solution is simple.

my classic example is here:

Truncate table PRODDTA.F4072TEM ;

INSERT INTO PRODDTA.F4072TEM (ADAST, ADITM, ADLITM, ADAITM, ADAN8, ADICID, ADSDGR, ADSDV1, ADSDV2, ADSDV3, ADCRCD, ADUOM, ADMNQ, ADEFTJ, ADEXDJ, ADBSCD, ADLEDG, ADFRMN, ADFVTR, ADFGY, ADATID, ADURCD, ADURDT, ADURAT, ADURAB, ADURRF, ADUSER, ADPID, ADJOBN, ADUPMJ, ADTDAY, ADIGID, ADCGID, ADOGID, ADPARTFG) SELECT ADAST, ADITM, ADLITM, ADAITM, ADAN8, ADICID, ADSDGR, ADSDV1, ADSDV2, ADSDV3, ADCRCD, ADUOM, ADMNQ, ADEFTJ, ADEXDJ, ADBSCD, ADLEDG, ADFRMN, ADFVTR, ADFGY, ADATID, ADURCD, ADURDT, ADURAT, ADURAB, ADURRF, ADUSER, ADPID, ADJOBN, ADUPMJ, ADTDAY, 0, 0, 0, ' ' FROM PRODDTA.F4072

I have all of the columns from this insert statement, but I want them down the page, not across.

So, I paste the contents into a spreadsheet and use DATA –> Text to Columns with the comma as the delimiter, so that I have all of the columns across the screen:

[screenshot]

Then Paste Special, ticking the Transpose option:

[screenshot]

Oh wow, life is good!!!

[screenshot]

This’ll work both ways too (said the vicar to the nun!)

Wednesday 7 May 2014

e1 TCs and run local

I’m trying to change a dodgy record that wants to run R894072 locally and not on the server.  This is a real pain: when things are run locally, it messes with all of the concurrency and queuing.  I find that when you have a mix of locally-run and server-run jobs, there is a bit more waiting around.

So I tried going into the plan and changing the 1 for run-local to 0, but this would never save – what a pain.

Then I do some looking about and find P98430.

[screenshot]

This allows you to update the master record, and then the pesky TC won’t run on the deployment server.  That is handy!

Monday 5 May 2014

Using heterogeneous database services to connect to AS/400 from Oracle.

 

Introduction:

Oracle have a great tool that allows you to treat an ODBC connection as a database link.  Note that it can only be used as a database link, not as a database that you can log into.  This seems like useless information now, but it’ll make sense in a little while.

[screenshot]

Why would you want to do this?  Well, there are about 100,000 reasons, but being able to use a central SQL editor to write any SQL across any database is a pretty good start.

[diagram]

Classic situation of reading right to left:

· A client connects to the database and requests data from the AS/400 via a database link
· The database link points to a tnsnames.ora entry that MUST be right in the database’s tnsnames.ora on the database server. Note that this is not on your client; it’s server-to-server comms. Note also that tnsnames defines this as a heterogeneous service (HS=) in the DESCRIPTION.
· The tnsnames.ora entry above points to a listener for the gateway SID that is referenced in the database link. In my case, this is the specific listener that I’ve defined in my second Oracle home for the gateway.
· The gateway listener has the SID in question defined (HS-based) to use dg4odbc to serve up the data. It also defines the Oracle home that the program is going to come from.
· Finally, this gateway program uses its init<SID>.ora file (initA01PRODDTA.ora in my case) in the HS subdirectory to find the system DSN that ODBC will use to serve the data. This is what points the Oracle listener to the generic ODBC name.
· ODBC then handles the rest!

That is a lot of moving parts!

Getting much quicker data transfers between the AS/400 and Oracle than R98403 can manage – well, that is another totally valid reason.

Software / hardware situation:

· Existing Oracle 64-bit 11gR2 install
· Windows 2008R2
· AS/400 V5R4
· Need to move data from AS/400 to Oracle

Steps:

Step 1: ODBC

You’ll need a 64-bit ODBC system DSN – easy.

[screenshot]

Remember, the 64-bit ODBC administrator is the one in %windir%\system32\

[screenshot]

You can set up whatever default libraries you want; when you come to use the link, you can pretty much select data from any library.

Step 2: Install gateway services

You’ll need Oracle gateway services installed (not so easy), so you need an additional Oracle home with additional software installed.  Note that the gateway for ODBC does NOT require any additional licensing (What!!  This is Oracle!!!).  This additional Oracle home will have a new listener, which we’ll use for the comms to the HS (heterogeneous service).

[screenshot]

D:\downloads\software\Oracle Database 11G R2 (11.2.0.3) x64 Windows\p10404530_112030_MSWIN-x86-64_1of7\gateways\stage\products.xml

I installed from a location that is similar to the above.

Note also that there is nothing to start, except for the listener.  No database instance to start.

Step 3: Create additional listener

As mentioned above, an additional listener:

D:\product\11.2.0\tg_1\NETWORK\ADMIN\listener.ora

# listener.ora Network Configuration File: D:\product\11.2.0\tg_1\network\admin\listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER_GW =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = A01PRODDTA)
      (ORACLE_HOME = D:\product\11.2.0\tg_1)
      (PROGRAM = dg4odbc)
    )
  )

LISTENER_GW =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = DBserver.com)(PORT = 1522))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
    )
  )

ADR_BASE_LISTENER_GW = D:\product\11.2.0\tg_1

Start and stop your new listener

lsnrctl start LISTENER_GW

lsnrctl stop LISTENER_GW

Step 4: Define HS instance that the listener connects to

You’ll need to configure the HS data source which the listener will connect to.

D:\product\11.2.0\tg_1\hs\admin\initA01PRODDTA.ora

# This is a sample agent init file that contains the HS parameters that are
# needed for the Database Gateway for ODBC
#
# HS init parameters
#
HS_FDS_CONNECT_INFO=A01PRODDTA
HS_FDS_TRACE_LEVEL=NO
#
# Environment variables required for the non-Oracle system
#
#set <envvar>=<value>

Note that my HS instance is called A01PRODDTA, hence the file name.

Step 5: tnsnames.ora

You’ll need to configure tnsnames.ora on your database server.  YES – on the server!  The database link lives in the database, and the entry needs to be right there for your SQL to work (this took my slow mind some time to understand).

D:\Oracle\product\11.2.0\dbhome\NETWORK\ADMIN\tnsnames.ora

A01PRODDTA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = NZAKLEVFN739.zeus.ghsewn.com)(PORT = 1522))
    (CONNECT_DATA = (SID = A01PRODDTA))
    (HS = )
  )

Note that the syntax of the above is VITAL – the (HS=) entry is what marks it as a heterogeneous service.

Step 6: Create DB link

This is simple, but must be a DB link.

[screenshot]
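For reference, the link looks something like this (the user and password are placeholders – use a profile that the AS/400 accepts; the double quotes matter for HS links):

-- create the link against the A01PRODDTA tnsnames entry (credentials are placeholders)
CREATE DATABASE LINK a01proddta CONNECT TO "ODBCUSER" IDENTIFIED BY "password" USING 'A01PRODDTA';
-- smoke test: the AS/400 catalog is a handy first target
SELECT COUNT(*) FROM qsys2.systables@a01proddta;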

Some errors along the way

The above was what I got wrong for many hours, without knowing where it was going wrong.  I got about 100,000 of these:

D:\product\11.2.0\tg_1\hs\trace\A01PRODDTA_agt_47444.trc

Oracle Corporation --- MONDAY MAY 05 2014 12:11:48.283
Heterogeneous Agent Release 11.2.0.3.0
HS Agent diagnosed error on initial communication,
probable cause is an error in network administration
Network error 2: NCR-00002: NCR: Invalid usage
HS Gateway: NULL connection context at exit
dg4odbc ORA-28546: connection initialization failed, probable Net8 admin error

All of the above was because my tnsnames.ora was not right.  This is the one the main database uses, in its home\network\admin – on the database in which you created the link (not the system you are pointing to with the link).

Force parallel for large DDL / DML

If you are about to run a large script – index generation, create table as select, etc. – make sure that you execute something like the below before you begin.

This means that you don’t need to add a parallel clause to the end of all the statements, because your session will have it set.  Perhaps not 32, but if you have 32 procs – go for it!

ALTER SESSION FORCE PARALLEL DDL PARALLEL 32;
ALTER SESSION FORCE PARALLEL DML PARALLEL 32;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 32;
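To confirm that the force took effect (assuming you have access to v$session), the three status columns should show FORCED for your session:

-- check the current session's parallel settings
SELECT pdml_status, pddl_status, pq_status
FROM v$session
WHERE sid = SYS_CONTEXT('USERENV', 'SID');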

Friday 2 May 2014

give me the size of data and indexes for a file in an oracle database please

I know that I’ve posted before on a similar topic.  This is slightly different.  Say you want to find the SIZE of your F0911 and the size of its indexes (generally more than the data, I might add).  If you log in as the owner of the table (PRODDTA, for example), you can use this.

This will give the total in MB (megabytes).

Data:

select sum(bytes)/1024/1024, 'f0911' from user_segments where segment_name = 'F0911' ;

Indexes:

select sum(bytes)/1024/1024, 'f0911index' From user_segments where segment_name like 'F0911\_%'  ESCAPE '\' and segment_type = 'INDEX' ;

The only cool thing is the use of ESCAPE to escape the _, which normally matches ANY one character in Oracle.  This is how you can look for either % or _ as a literal in a string with a LIKE statement.
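And if you want both numbers in one pass, the same view can give them side by side:

-- data and index sizes for F0911, in MB, in one query
select segment_type, sum(bytes)/1024/1024 as mb
from user_segments
where segment_name = 'F0911'
   or (segment_name like 'F0911\_%' escape '\' and segment_type = 'INDEX')
group by segment_type;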

White Paper of JD Edwards load testing

I’ve been involved with the creation of this white paper on load testing in JD Edwards – take a look.

http://elink.myriad-it.com/m/1/21377861/b12114-49fa7c4a-fbb2-4142-afb2-804a5c2864e5/1/161/f1d41714-4368-420a-92db-392eb662c6ef

Shows some really good use-case scenarios.

Thursday 1 May 2014

How big is your F0911? F0911 and oracle compression

This is a quick note on Oracle OLTP compression.  You’ll need to buy the Advanced Compression option to use this type of compression, which is good for OLTP-type applications.  The standard compression available with EE is only good for static data; when your tables grow all the time, it’s not a great option.  So pony up and pay the money.

How much space are you going to save?  In my experience, you can save 89% of your disk by implementing compression on a table like F0911…  “What do you mean, Shannon?” I hear you asking…  Say you had an F0911 that was 50GB (excluding indexes, approx 80 million rows): it could come down to about 5.5GB!!!!  This is amazing.  I’ve seen large tables come down to between 11% and 28% of their original size.

What’s that – hold off on your hardware upgrade and buy a database option pack (via Myriad) to solve all your performance problems?  What a great idea. Smile (This is my first use of an emoticon in a blog post, only because I was looking for the option of removing space between paragraphs; it might be my last too.)

Of course, it may not all be positive.  Perhaps more CPU usage to decompress – but who cares about CPU; disk is king when it comes to better performance on large databases.  How are you going to ensure that this means better performance for the end user?  Well, that is simple too – get Myriad to do some load testing for you with Oracle Application Testing Suite.  We could easily benchmark before and after in an isolated environment to ensure that everything will be “alright on the night”.
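For reference, turning it on for an existing table is a one-liner plus index rebuilds – a sketch only (the index name is made up, and a MOVE needs an outage, as it leaves every index on the table unusable):

-- 11g syntax; requires the Advanced Compression option licence
ALTER TABLE PRODDTA.F0911 MOVE COMPRESS FOR OLTP;
-- rebuild each index on the table afterwards
ALTER INDEX PRODDTA.F0911_1 REBUILD;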

[chart]

Let the chart do the talking!

Where are favourites / favorites stored in JD Edwards

They are actually stored in F9000 – as tasks.

They have a TMTASKNM of the username with an @ prepended.  So, a query like:

SELECT * FROM PRODCTL.F9000 WHERE TMTASKNM LIKE '@%' ;

is probably going to give you what you need.  The items inside each favourite are stored in F9001: take the task ID from F9000, find the F9001 rows where it is the parent task (TRPARNTTSK), and the child tasks those rows point to are what is in the favs.

This is easy in SQL too:

select * from prodctl.f9000 where tmtaskid in (
  select trchildtsk from prodctl.f9001 where trparnttsk in (
    select tmtaskid from prodctl.f9000 where tmlngtask like '@%' and tmtasknm like '@%'));

Armed with the above, you could remove certain favourites, or find the missing ones – all within your grasp. 

They also seem to have a TMTASKTYPE of 07, so you can probably use that also.
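If that holds on your release, the filter is even simpler – hedged, so verify the value against your own data first:

-- hypothetical alternative, relying on task type 07 marking favourites
SELECT * FROM PRODCTL.F9000 WHERE TMTASKTYPE = '07';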