Wednesday 29 March 2017

Use case for ERP analytics #101–tracking down inappropriate activity

A client reported that some orders were being placed in production by one of the testing users.  They asked me if I could track down where this was coming from.  The traditional answer: no – I have no idea.

This client is lucky enough to have ERP analytics installed at their site, so I was able to drill a little deeper.

The first thing I did was create a custom report for tracking down user data:

image

You can see from the above that I have region, city, network domain and more for this session.

The first tab I start with is “Find user”:

image

So now I can look for the user.  I could drill down on environment too, which would mean that I only get production data.

Step 1 – find my user

image

Step 2 – look at environments

image

Step 3 – click on PD920

image

Now I see what has been going on!

I choose my language and region tab, and I get the city and region information from the client’s session.

image

Bosch, I have loads of information about the perp – I can give this to the client to see what is going on!

I can create another tab to see what applications they were using

image

Crime solved – what’s next, ERP analytics?

Tuesday 28 March 2017

Rescuing E1Local from a complete reinstall

We’ve all had it: after a package install, you cannot connect to E1Local!  Arrgghh!

Some clients get it more than others.  It seems that virus scanning and other site specifics (taking VM snapshots) are killing all of the Oracle databases at once, and they appear unrecoverable.

The alert log (C:\Oracle\diag\rdbms\e1local\e1local\alert\log.xml) shows:

ocal_ora_98068.trc:
ORA-01113: file 5 needs media recovery
ORA-01110: data file 5: 'C:\E920\DV920\SPEC\SPEC_DV7012000.DBF'
</txt>
</msg>
<msg time='2017-0

And jde.log shows:

74644/74648 MAIN_THREAD                           Thu Mar 23 15:36:51.370000    jdb_ctl.c4199
    Starting OneWorld

74644/74648 MAIN_THREAD                           Thu Mar 23 15:37:00.529000    dbinitcn.c929
    OCI0000065 - Unable to create user session to database server

74644/74648 MAIN_THREAD                           Thu Mar 23 15:37:00.530000    dbinitcn.c934
    OCI0000141 - Error - ORA-01033: ORACLE initialization or shutdown in progress
 
74644/74648 MAIN_THREAD                           Thu Mar 23 15:37:00.530001    dbinitcn.c542
    OCI0000367 - Unable to connect to Oracle ORA-01033: ORACLE initialization or shutdown in progress
 
74644/74648 MAIN_THREAD                           Thu Mar 23 15:37:00.530002    jdb_drvm.c794
    JDB9900164 - Failed to connect to E1Local

This is painful.

Make sure that the current OS user is a member of the highlighted group below

image

Make sure you are using the server-based SQL*Plus after changing the security in sqlnet.ora to NTS (see previous post).  Use the first of these (the E1Local copy), not the client one:

C:\Oracle\E1Local\BIN\sqlplus.exe
C:\Oraclient\product\12.1.0\client_1\BIN\sqlplus.exe

C:\Windows\system32>sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 23 15:39:25 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shutdown
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  3050800 bytes
Variable Size             381682384 bytes
Database Buffers          415236096 bytes
Redo Buffers                5337088 bytes
Database mounted.
ORA-01113: file 5 needs media recovery
ORA-01110: data file 5: 'C:\E920\DV920\SPEC\SPEC_DV7012000.DBF'

SQL> alter database datafile 'C:\E920\DV920\SPEC\SPEC_DV7012000.DBF' offline drop ;

Database altered.

Then impdp your datafile.  You need a user with DBA rights to do this; I created jdeupg:

sqlplus / as sysdba

SQL> create user jdeupg identified by myP@55# ;

User created.

SQL> grant dba to jdeupg
  2  ;

Grant succeeded.

SQL> quit

Then at the command line

C:\Windows\system32>impdp jdeupg/myP@55# TRANSPORT_DATAFILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'

Now, you need to be careful about the database directory object and where it was last set.  It will point to the location of the last full package, so you need to set it to the location of the file that you are trying to rescue – in my case DV920\spec.

select * from all_directories ;

ORACLE_HOME    /
ORACLE_BASE    /
OPATCH_LOG_DIR    C:\Oracle\E1Local\QOpatch
OPATCH_SCRIPT_DIR    C:\Oracle\E1Local\QOpatch
OPATCH_INST_DIR    C:\Oracle\E1Local\OPatch
DATA_PUMP_DIR    C:\Oracle/admin/e1local/dpdump/
XSDDIR    C:\Oracle\E1Local\rdbms\xml\schema
XMLDIR    C:\Oracle\E1Local\rdbms\xml
ORACLE_OCM_CONFIG_DIR    C:\Oracle\E1Local/ccr/state
ORACLE_OCM_CONFIG_DIR2    C:\Oracle\E1Local/ccr/state
PKGDIR    C:\E920\UA920\data\

When in SQL*Plus, repoint the PKGDIR directory (and, as I found later, you may also need to drop tablespace SPEC_DV7012000 including contents):

select * from all_directories ;
drop directory PKGDIR;
create directory PKGDIR as 'C:\E920\DV920\spec' ;

Note that you might need to delete the previous imp and exp log files too – these are the .log files in the directory (DV920\spec) that you are importing to.  Until I did, I was getting:

C:\E920\DV920\spec>impdp jdeupg/PAss# TRANSPORT_DATAFILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'

Import: Release 12.1.0.2.0 - Production on Thu Mar 23 16:21:36 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation

Finally I have my ducks lined up; time to run the import:

Starting "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01":  jdeupg/******** TRANSPORT_DATAFILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29349: tablespace 'SPEC_DV7012000' already exists

Job "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01" stopped due to fatal error at Thu Mar 23 16:22:07 2017 elapsed 0 00:00:10

Crappo – I need to drop this:

sqlplus / as sysdba

drop tablespace SPEC_DV7012000 including contents ;

Go again:

Starting "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01":  jdeupg/******** TRANSPORT_DATAFILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-19721: Cannot find datafile with absolute file number 13 in tablespace SPEC_DV7012000

Job "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01" stopped due to fatal error at Thu Mar 23 16:26:29 2017 elapsed 0 00:00:03

Damn – this means that my file truly is corrupt.  Right, grab a fresh one from the deployment server spec directory.

C:\E920\DV920\spec>impdp jdeupg/Pass# TRANSPORT_DATAFILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'

Import: Release 12.1.0.2.0 - Production on Thu Mar 23 16:29:31 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded

Starting "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01":  jdeupg/******** TRANSPORT_DATAFILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Thu Mar 23 16:30:15 2017 elapsed 0 00:00:43

Working, and I can log into JDE.  What a saga.  But, now that I have the knowledge, this is going to save me time going forward.

Thursday 23 March 2017

Go-live infographic

Can you give your managers some insights into a go-live like this?

ERP analytics can give you some very unique insights into what is going on behind the scenes of an upgrade!

image

ERP analytics use case #99–the upgrade

I’ve been working on a significant upgrade over the last couple of months, and we pulled the trigger over the weekend.  Things have been going okay so far (I’m always very conservative when doing upgrades).  We’ve had no downtime, and for the main part things are working.  This is an amazing result and a testament to the stability of the code coming out of JD Edwards, and also to the great testing from the client.

Anyway, to my point.

We had an interesting scenario where users could not use P512000 in 9.2.  I could not believe it – how could this be right?  I looked at the security records between the two releases and they were solid (identical).  So, being a bit too black and white, I say “they must never have run it”…

I then go to Google Analytics for their 910 environment to see:

image

I choose security analysis, as this is the core of what I’m checking

drill down to security

image

Choose the environment

image

Note that I’m using the last 3 months of data – actually looking at over 1.5 million page views.

I search for P512000

image

I find that it’s been used – wow, security was right!

And then I see the 50 users that have loaded that application in the last 3 months.  I can see when they loaded it (time of day, day of month) and also how long it took to load.

ERP analytics is making my job easier!

By the way, it turns out that the application loads P512000A when it runs…  I needed to add security for that!

Monday 20 March 2017

Thick client installation not finishing–38% complete

Go lives provide a lot of fodder for my blog.

Lucky me, I’m currently working through a plethora of errors, and now we are on my favourite: client install problems!

I have an issue where the installer hangs at 38% on a thick client.

I can see from the installer log (C:\Program Files (x86)\Oracle\Inventory\logs) that it’s hanging on the following process:

INFO: Username:jdeupg

INFO: 03/20/17 10:53:56.422 SetDirectoryPermissions = "C:\Windows\SysWOW64\icacls.exe" "C:\E920" /inheritance:e /grant "e1local":(OI)(CI)F /t

I found a great article on MOS saying that you need to change the following parameter for the Oracle installer:

D:\JDEdwards\E920\OneWorld Client Install\install\oraparam.ini

JRE_MEMORY_OPTIONS=" -mx256m"

Once this was done, the installer finished without any problem.

This did occur on a fat client that already had 4 pathcodes installed, so the pressure was on.

When I cancelled the installer, I needed to fix the package.inf file.  If you do not make this change, the following occurs:

clip_image001

Note that when you bomb out of the installer at this point, it leaves the following fields blank in the local package.inf file (C:\E920).

You need to ensure that these two values in your local package.inf (c:\E920) are correct

SystemBuildType=RELEASE

FoundationBuildDate=SAT DEC 03 14:47:50 2016

Note that the second value must match the date from the package_build.inf file in the \\depserver\e920\package_inf directory:

[Attributes]
AllPathCodes=N
PackageName=PD7031900
PathCode=PD920
Built=Build Completed Successfully
PackageType=FULL
SPEC FORMAT=XML
Release=E920
BaseRelease=B9
SystemBuildType=RELEASE
ServicePack=9.2.01.00.07
MFCVersion=6
SpecFilesAvailable=Y
DelGlblTbl=Y
ReplaceIni=Y
AppBuildDate=Sun Mar 19 09:37:20 2017
FoundationBuildDate=Sat Dec 03 14:47:50 2016
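To make that check less error-prone, the two fields can be compared programmatically.  This is a rough sketch only – the helper name and paths are mine, and it assumes the standard INI-style layout shown above (the date comparison is case-insensitive, since the installer writes "SAT DEC" while package_build.inf has "Sat Dec"):

```python
# Sketch: verify SystemBuildType and FoundationBuildDate in the local
# package.inf match the deployment server's package_build.inf values.
import configparser

def read_attributes(path):
    cp = configparser.ConfigParser()
    cp.optionxform = str          # keep key case (SystemBuildType, etc.)
    cp.read(path)
    return dict(cp["Attributes"])

def check_package_inf(local_inf, server_inf):
    """Return a list of problems; an empty list means the client is OK."""
    local_attrs = read_attributes(local_inf)
    server_attrs = read_attributes(server_inf)
    problems = []
    for key in ("SystemBuildType", "FoundationBuildDate"):
        if not local_attrs.get(key):
            problems.append(f"{key} is blank in {local_inf}")
        elif local_attrs[key].upper() != server_attrs.get(key, "").upper():
            problems.append(f"{key} differs: {local_attrs[key]!r} vs {server_attrs.get(key)!r}")
    return problems
```

Run it against C:\E920\package.inf and the deployment server copy before kicking off another client install.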

Such a pain!

JD Edwards go-live–I want to see my old Jobs

This is a common problem and one that you really need to prepare for.  Offer the service up and your users are going to love it.  When you go live, you generally change your SVM data source, which means the F986110, F986114, etc. are going to be missing their old records.  For example, I’ve just done a 910-to-920 go-live, and now my users in 920 cannot see their old PDFs and CSVs.

Right, so what can I do.

A quick squizz at the old printqueue (/E910SYS/PRINTQUEUE – yeah, IFS = AS/400) tells me that there are only 750,000 PDFs…  What!  Plus the logs and the CSVs and more – no way, about 1,000,000 files.  No wonder it was hard to browse the IFS.

Okay, I don’t want to move the million files, and selective mvs get errors in STRQSH – man, that is the crappiest interface EVER!!!  It’s like the world’s worst beta.

image

Sure, I like being able to run Unix commands on the green screen – but that interface!!!  Wow, crap.  Does anyone know if I can ssh to the box without a 5250?

Honestly, it’s sooooo terrible.

You never know if a command has hung or whether it’s complete.

Anyway, stop complaining…

So, there are too many files to run any commands – I cannot even use find with exec – and the interface is so bad it makes everything worse.

> ls /E910SYS/*                                          
  qsh: 001-0085 Too many arguments specified on command. 
         0                                               
  $
                                                      

I get the above no matter what I try (after 5 minutes of waiting, of course).
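The glob fails because the shell expands /E910SYS/* into one enormous argument list before any command runs.  An approach that never builds that list is to stream the directory instead.  This is a sketch only – it assumes you can run Python against the IFS (e.g. from PASE or a mounted share), and batch_entries and the batch size are my own illustrative names:

```python
# Sketch: walk a huge directory without ever expanding a shell glob --
# os.scandir streams entries one at a time, so argv limits never apply.
import os

def batch_entries(path, batch_size=5000):
    """Yield lists of at most batch_size file names found in path."""
    batch = []
    with os.scandir(path) as entries:
        for entry in entries:
            if entry.is_file():
                batch.append(entry.name)
                if len(batch) == batch_size:
                    yield batch
                    batch = []
    if batch:
        yield batch
```

Each yielded batch could then be moved with os.rename or fed into a generated script, a few thousand files at a time.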

I finally decide to go rogue on this problem.

I run the following SQL:

--This is slightly unrelated, but I copy over all of the records from svm910 so that I can see them in WSJ for retrieval.  Nice – now I just need to put the files where JDE expects them.  Note that I’m still using the file system for my PrintQueue.

insert into svm920.f986110 (select * from svm910.f986110) ;

--I now build the mv commands and pop them into a .sh file for execution through STRQSH:

select 'mv /E910SYS/PRINTQUEUE/' || trim(JCFNDFUF2) || '_' || JCJOBNBR || '_PDF.pdf /E920SYS/PRINTQUEUE'
from svm920.f986110 where jcsbmdate > 117070 and jcenhv like '%910%';

This generates a pile of these (50K); I’ll do a couple of days’ worth at a time!

mv /E910SYS/PRINTQUEUE/R5642005_SIM001_13368050_PDF.pdf /E920SYS/PRINTQUEUE                        
mv /E910SYS/PRINTQUEUE/R5747011_SIM001_13368051_PDF.pdf /E920SYS/PRINTQUEUE                        
mv /E910SYS/PRINTQUEUE/R5543500_SIM002_13368052_PDF.pdf /E920SYS/PRINTQUEUE                        
mv /E910SYS/PRINTQUEUE/R5641004_SIM901_13368053_PDF.pdf /E920SYS/PRINTQUEUE                        
mv /E910SYS/PRINTQUEUE/R5642023_SIM902_13368054_PDF.pdf /E920SYS/PRINTQUEUE                        
mv /E910SYS/PRINTQUEUE/R5531410_TOP008_13368055_PDF.pdf /E920SYS/PRINTQUEUE         
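The jcsbmdate cutoff in the WHERE clause above uses JDE’s “Julian” date format, CYYDDD – a century flag, two-digit year, and day of year – so 117070 is 11 March 2017.  A tiny helper (my own sketch, not part of any JDE toolset) makes computing a cutoff less error-prone:

```python
# Sketch: convert a calendar date to the JDE "Julian" CYYDDD format used
# in columns like JCSBMDATE (e.g. 11 Mar 2017 -> 117070).
import datetime

def to_jde_julian(d: datetime.date) -> int:
    # (year - 1900) gives the CYY part; tm_yday gives the DDD part.
    return (d.year - 1900) * 1000 + d.timetuple().tm_yday
```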

Once I have my 50,000 lines, I create a copyPDF.sh in a small IFS directory and paste in the contents of the above.

I then chmod 755 this file and run it through STRQSH.

Bosch!  I have 50,000 PDFs copied over to the E920 location so that my users can see their historical data.

Cache debugging–JDBj Service cache vs. Database Cache

We all love cache, as we think it makes things quicker.  Caching moves data away from the single source of truth in order to improve performance – and that distance from the single source of truth is its inherent problem, causing locking and concurrency issues.

JD Edwards has a number of different caches, and you need to do lots of digging to find out which cache you are dealing with.  Data, for the main part, has two caches: the JDBj service cache and the database cache (kernels).

The JDBj cache is easy: it is available to clear in SM, and if Java code is running, it will look here for cached values.

The JDBj service cache is the important one – this is where the data lives.  Refer to the table below to see which tables are included in the JDBj service cache.

image

The service cache (JDBj)


Table      Table Name/Description                       Others
F0004      User Defined Code Types                      & Database Cache
F0005      User Defined Codes                           & Database Cache
F0005D     User Defined Codes - Alternative Language    Only Service Cache (Not Database Cache)
F0010      Company Constants                            & Database Cache
F0013      Currency Codes                               & Database Cache
F0025      Ledger Type Master File                      & Database Cache
F0092      Library List - User                          Only Service Cache (Not Database Cache)
F00941     Environment Detail - OneWorld                Only Service Cache (Not Database Cache)
F0111      Address Book - Who's Who                     Only Service Cache (Not Database Cache)
F9500001   CFR Configuration Table                      Only Service Cache (Not Database Cache)
F95921     Role Relationship Table                      Only Service Cache (Not Database Cache)
F9861      Object Librarian - Status Detail             Only Service Cache (Not Database Cache)

Wouldn’t it be nice to be able to drill down into this cache and see the values?  Well, nice for a nerd like me…  Perhaps not everyone wants to see this.

Database Cache

This is the kernel cache.  It seems to live in shared memory for all kernels of a logical data source, so all kernels refer to the same cached values.  If these are updated, then everyone on the server gets to enjoy the change.  Note that UBEs create their own cache at initialisation time.

The P98613 (Work With Database Caching) application will list all tables cached within your own environment, because the tables defined can vary between EnterpriseOne versions and tools releases.  Below is a basic walkthrough of where data is cached.

Note that P986116D can also help you clear one table at a time, but this is kernel cache (BSFN cache) – it is not going to affect JAS!

Clear all database cache – P986116D – advanced

image

See the tables being cached

image

Clear a table at a time

image

Run P986116D W986116DA in fast path

image

Choose the table that you want to clear the cache for.

 

The Dilemma

There are many scenarios where the cache clears need to be coordinated.  For example, if you are in P0010 and you change the period of a company, the new functionality in 9.2.x will do the kernel cache clear – but guess what, it does not clear the JDBj cache.

So…  When you go to enter a journal, you get a period error – because the JDBj service cache does not refresh automatically.  There is a bug currently due for fix in 9.2.1.1 (Bug 24929695: ISSUE WITH JDB_CLEARTABLECACHE) which seems to indicate that there is going to be a link from the JD Edwards runtime back to SM, enabling the JDBj service cache to be cleared from JDE.  This would be nice.  It seems that this calls clearTableCacheByEnvironmentMessage, but when I search my 9.2.1.0 source code and system/include, I don’t see any references to it.

I’m guessing that there is going to be a service entry, or perhaps a PO, that defines the SM URL and port so that an automated cache clear can be triggered.  They (Oracle) might also use the AIS/REST-based interface to SM to enable this functionality.

 

 

Friday 17 March 2017

iASP, JDE, DBDR

This is going to get limited interest I know.

Wow, I’m finding that AS/400s are getting more and more painful to support with JD Edwards.  An idea one of my learned colleagues just had: we should charge AS/400 clients an additional 50% on their rates for the frustration and premature aging we get from managing their kit!  Ha…  Sorry Nigel.

For instance, how about this.

image

Simple datasource definition using JD Edwards.

I want to do a quick test using another AS/400 for production.  You might ask why?  I’ve built my PD920 environment early (before the go-live weekend).  I’ve merged UDOs and done everything except the data and control table merges.  So, I’m going to test the entire environment with a full set of converted data and control tables to ensure things are peachy!  It’s great to have this done before the go-live weekend.

I quickly change the machine name above and the library, and think I can rely on DBDR to do the rest.  Simple!

Restart services, and most things work (I say most) – UBEs don’t.

The UBE logs show:

1267374           Fri Mar 17 09:56:09.280520      dbdrv_log.c196

            OS400QL001 - ConnectDB:Unable to connect to DS 'Business Data - PROD' in DB 'Business Data - PROD' on Server DB 'CHIJTD41' with RDB Name 'CHIJPD61' via 'T' with Commitment 'N'. QCPFMSG   *LIBL      - CPFB752 - Internal error in &2 API

1267374           Fri Mar 17 09:56:09.280888      dbdrv_log.c196

            OS400RV007x - DBInitConnection:PerformConnection failed

1267374           Fri Mar 17 09:56:09.280968      jdb_drvm.c794

            JDB9900164 - Failed to connect to Business Data - PROD

1267374           Fri Mar 17 09:56:09.281024      jtp_cm.c282

            JDB9909003 - Could not init connect.

Oh man!!!  AS/400 – why do you do this to me???

Then another learned colleague tells me: “Did you use JDE to change the data source information?  You can’t do that – you need to use SQL and change the OMLL field in F98611 to reference the correct iASP for the query…”  HUH?  Oh – of course!  Why didn’t I think of that?

select * from svm920.f98611;
select omdatp, omll from svm920.f98611;
update svm920.f98611 set omll = 'CHIJTD41' where omdatp in ('Business Data - PDT', 'Business Data - PROD', 'Control Tables - PDT', 'Control Tables - Prod');

Great – another restart of services and we are running again, and now UBEs are processing.

Tuesday 14 March 2017

Cookie monster: user agents, compatibility mode and more – load balancer problems

I’m having issues with some cookies, specifically JSESSIONID – specifically when using IE11, and specifically when I use a proxy server.  Yes, the combination of all of these is giving me immediate “your session has expired” messages from JDE.  This is a classic “load balancer not using JSESSIONID to manage web server affinity” problem.  We’ve all seen it 20 times before.

  • The problem is that Chrome works with the proxy.
  • The problem is that IE11 works without the proxy.

So where is my problem?

Let’s start with some basics.

The JDE login page makes 295 calls back and forth, sending 172KB of traffic and receiving 3.97MB.  Did you know that?  Cool stat!

The web server sets a cookie almost immediately:

Key    Value
Set-Cookie    JSESSIONID=SoLLe2MGnfzecDkSIxm56qv8pf2a0SXrmjeRTFPGw84Xo8k89XBK!1342963082; path=/; HttpOnly

This expires at the end of the session.  Easy!

image

Note that this is used for all of the exchanges of gif’s and css’s.

Now, we try and login.

image

The first time we make a MAF call, the cookie is set on us to a different value:

URL    Protocol    Method    Result    Type    Received    Taken    Initiator    Wait‎‎    Start‎‎    Request‎‎    Response‎‎    Cache read‎‎    Gap‎‎
/jde/ResourceCanonicalsJS.mafService?e1UserActInfo=false&e1.mode=view&e1.namespace=&RENDER_MAFLET=E1Menu&e1.state=maximized&e1.service=ResourceCanonicalsJS    HTTP    GET    200    application/x-javascript    67.28 KB    78 ms    <script>    1404    0    31    47    0    234

We send:

Key    Value
Cookie    _ga=GA1.1.1246774349.1489119748; _gat=1; JSESSIONID=u1jLgQJ7eBiRq9_cBQ3KPSUN71R2OmOJtgWRcpySaAWKc6gxlJrU!1830213081; e1AppState=

but, the server sends us:

Key    Value
Set-Cookie    JSESSIONID=eTnLgQfKjk4NyAAkLYm8XvOij6Oax5WNqwM0zgPaYhe78J5j_Rt1!1342963082; path=/; HttpOnly

So the browser can only use the new value in the next request – but we’ve already flunked our session, as the cookie originally passed was invalid.

The interesting thing is that when I remove the proxy, the JSESSIONID is sent all of the time – every request for a gif or png has the cookie in the request header.  So the proxy or browser is stripping this out very early!
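That symptom can be spotted mechanically from a capture: compare each request’s JSESSIONID with the one the server last issued, and flag the exchange where the server re-issues a new id even though we presented the current one.  A hedged sketch (header parsing simplified; first_affinity_break is my own illustrative helper, not a JDE or LB API):

```python
# Sketch: given an ordered list of (cookie_sent, set_cookie_received)
# header pairs, find the first exchange where the server issued a fresh
# JSESSIONID despite the browser presenting the one it was just given --
# the affinity-break signature described above.
import re

def session_id(header):
    """Pull the JSESSIONID value out of a Cookie/Set-Cookie header."""
    m = re.search(r"JSESSIONID=([^;,\s]+)", header or "")
    return m.group(1) if m else None

def first_affinity_break(exchanges):
    expected = None
    for i, (sent, received) in enumerate(exchanges):
        presented = session_id(sent)
        new_id = session_id(received)
        # Re-issuing an id while we presented the current one means some
        # hop (LB member change, proxy) dropped our session.
        if expected and presented == expected and new_id and new_id != expected:
            return i
        if new_id:
            expected = new_id
    return None
```

Feeding it the exchanges captured in the F12 network tab would point straight at the request where the session was lost.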

The reply from the server is pretty simple: “I do not know you; here is a new JSESSIONID if you want to try again.”

{"notificationReply": {"idle":0, "timedout":true, "logoutURL":"/jde/MafletClose.mafService?action=close&e1UserActInfo=false&e1.mode=view&jdemafjasFrom=SessionTimeout&e1.namespace=&RENDER_MAFLET=E1Menu&e1.state=maximized&e1.service=MafletClose"}}

See that we have an appState and now two session IDs!

Direction    Key    Value    Expires    Domain    Path    Secure    HTTP only
Sent    JSESSIONID    fwPLmXbOOv9FTQ1mY3cm_BFUAg3C5UuAfWmzDpK4T2as7ovnsVe4!1830213081                   
Sent    e1AppState    E1MENUMAIN_5531488967363928064:E1MENUMAIN_5531488967363928064|                   
Received    JSESSIONID    tgHLmXbO-vkMOxZmJr-1vfxgdeg7eJaq5JLxuYqLz5ljXrnUeEjn!1342963082    At end of session        /    No    Yes

The client is doing a POST of the following content:

    • Request headers:
    • Key  Value
    • Request    POST /jde/NotificationController.mafService HTTP/1.1
    • Accept     */*
    • Referer     http://jde92.xxxxx/jde/js/notificationWorker.js
    • Accept-Language en-AU
    • Content-Type    text/plain;charset=UTF-8
    • Accept-Encoding gzip, deflate
    • User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko
    • Host jde92.xxxxx
    • Content-Length  5
    • Connection Keep-Alive
    • Cache-Control   no-cache
    • Cookie     _ga=GA1.1.1246774349.1489119748; JSESSIONID=FOLMOkDx9cu0Trx4oC7Joy24sGBo8pc-bdFde-mCP5vcDdCVrLY4!1342963082; _gat=1; e1AppState=E1MENUMAIN_3349205142792595456:E1MENUMAIN_3349205142792595456|
    • Sent payload
    • cmd=0

The response to this request is the Set-Cookie:

Key    Value
Response    HTTP/1.1 200 OK
Date    Tue, 14 Mar 2017 07:32:29 GMT
Content-Type    text/html; charset=UTF-8
Set-Cookie    JSESSIONID=YVbLu4qUOF1x4Fesurjkq-Fb-N54fgjN-sE0biUYng3md9cVHl5H!1830213081; path=/; HttpOnly
Content-Length    208

When Chrome sends this, there is no reply with an alternate cookie in the header.

Wow, this is tough.  When you only have the client end of the comms, this is difficult to solve.

We got an LB consultant in who was able to home in on the LB side of things, and indeed was able to solve the problem.  This is a complicated LB which actually detects the user agent and sends people to the appropriate JVM (because of ActiveX – if you know much about JDE you’ll understand this).  This is a nice fix and some cool functionality to put into the LB.

The browser was sending Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko which is wrong.

What was happening was that the rules separating IE from other browsers were being triggered by certain payloads (the browser emulation went generic for some POSTs), which then went to a different LB member, and everything broke.  Wow!

What the consultant found was that JDE implemented this on some of their pages.

<meta http-equiv="X-UA-Compatible" content="IE=edge">

This client has “display intranet sites in compatibility mode” enabled, which meant that JDE was being displayed in compatibility mode.  The meta tag above overrides the “Compatibility View” settings in IE and forces it to use IE11 (as opposed to IE7, which is the “Compatibility View” default).  This explains why I was seeing the IE11 user agent on some requests, and the IE7 user agent on others.

Note that this causes the browser to disregard the payload and request it again using the compatibility settings.  This means that accessing JDE in IE11 with compatibility mode and a fancy LB is going to break things every time.  You need to be careful about which headers you use when directing traffic.

You can fix this with the LB rules by matching on some different options, not just the IE11 string – match on the agent token rv:11, for example.
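As a sketch of that matching logic (my own illustration, not actual LB rule syntax): both the genuine IE11 string and the compatibility-mode string still carry Trident/7.0, so a rule that matches Trident/7.0 or rv:11.0 keeps both variants on the same pool:

```python
# Sketch of the matching rule: treat the genuine IE11 user agent and the
# compatibility-mode one (which reports MSIE 7.0 but still carries
# Trident/7.0) as the same browser, so the LB routes both to one pool.
import re

IE11_FAMILY = re.compile(r"Trident/7\.0|rv:11\.0")

def is_ie11_family(user_agent: str) -> bool:
    return bool(IE11_FAMILY.search(user_agent))
```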

Lessons: Ctrl+Shift+I in Chrome is awesome; F12 in Internet Explorer is awesome.  Browsers, proxies, compatibility mode and LBs are complex beasts.

Friday 3 March 2017

Grid formats and upgrades–woe is me! How to ensure all grids make it to 9.2

I want my users to be productive, I want them to be able to use their familiar environment.

I know that there has been a lot of change in the later releases, especially with UDOs – I’ve documented a bit of this in previous posts.  I’m going to deal quite specifically with F98950 and F952440, as F952440 is the new F98950.  When you are on a new tools release, user overrides are read from F952440 instead of F98950 – nice and easy.  But you need to get them there!

There is a defined method: calling R89952440, the table conversion for grid formats – simple!  You reckon?

How do I tell it where it’s going to get F98950 from?

It’s a TC, so you specify input and output environments – input might be UA910 and output might be UA920.  That makes sense!

image

So with that explained, let’s look a little deeper.  What I’ve found (and what others have told me) is that if you have records in F9860W or F9861W for grid formats, the process breaks down!

Firstly, your roles (and users) need to exist for this to work.  Look at this code without error handling:

image

So now we get to the F9860W logic

image

It checks if there is a record that matches on description, webObjectType, object, form and version…

But this is the F9860W, not the F9861W.  It seems to me that if ANY environment has this value, the GD won’t come forward.

My brain cannot handle the logic in this UBE, but let me give you the facts.  If I leave F9860W and F9861W alone and run the conversion, only 7,900-ish grid formats come over.  Aarrgghhh!!!

If I clean out the F9860W and F9861W and then run it, more than 12,000 come over!  You be the judge.  Even if I remove only the PP920 records from F9861W, I do not get all of my grids coming over.  This area needs some work.

Here is what you need to do if you want all of your grids to come over for an upgrade:

AS/400 syntax – sorry!

First, back up ol920 to a *savf.

-- these are the statements that do the actual work

create table ol920.f9860ws as (select * from ol920.f9860w where wowotyp='FORMAT') with data;

delete from ol920.f9861w where sipathcd = 'PP920' and siwobnm in (select wowobnm from ol920.f9860w where wowotyp='FORMAT');

delete from ol920.f9860w where wowotyp='FORMAT';

--so now you have no formats in the F9860W or F9861W – this is fine as these tables are all about transfers not runtime!

--run the conversion.  Create your own version of R89952440 from PLANNER.  For this TC, FROM is the F98950 location and TO is the F952440 location.  Make sure that the F98950 is also in the PP920 library if you are using the same library for both to and from.

-- Now, patch your F9860W: insert any missing formats from other environments.

insert into ol920.f9860w select * from ol920.f9860ws t2 where not exists (select 1 from ol920.f9860w t3 where t2.wowobnm = t3.wowobnm and t3.WOWOTYP = 'FORMAT') ;

Job done!  Time to reconcile with this little bad boy (thanks to Shae again):

select a.* from copp920.f98950 a left outer join copp920.f952440 b on
  (UOUSER=GFWOUSER and UOOBNM=GFOBNM and UOVERS=GFVERS and UOFMNM = GFFMNM and
  UOSEQ = GFSEQ)
  where GFOBNM is null and UOUOTY in ('GD')

42 did not make it, which matches the debug log EXACTLY!

Opening UBE Log for report R89952440, version SIMPP920

TCEngine Level 1 ..\..\common\TCEngine\tcinit.c : Input F98950 is using data source Central Objects - PP920.

TCEngine Level 1 ..\..\common\TCEngine\tcinit.c : Output F952440 is using data source Central Objects - PP920.

TCEngine Level 1 ..\..\common\TCEngine\tcinit.c : Conversion method is Row by Row.

TCEngine Level 1 ..\..\common\TCEngine\tcdump.c :

TCEngine Level 0 ..\..\common\TCEngine\tcrun.c : TCE009143 - Insert Row failed for table F952440.

…(this line repeats once for each grid format that failed to insert)…

TCEngine Level 1 ..\..\common\TCEngine\tcrun.c : Conversion R89952440 SIMPP920 done successfully. Elapsed time - 301.060000 Seconds.

UBE Job Finished SuccessFully.

Thursday 2 March 2017

Create a custom tools release with POCs included (Thanks Shae!!)

 

This has been nicked – verbatim – from Shae’s blog, but it’s so very good, I needed to share it with you.  This is a great way of ensuring that POCs are sticky and done properly!

Grab your original PAR file and then extract it (using 7-Zip or whatever, it is just a zip file)

Should be something like this
image

The webclient.ear file is also a zip, just open it with 7-Zip

Webclient.war is also a zip; expand it out with 7-Zip till you find where your POC goes.  For this one it’s D:\downloads\9.2.1.0-HTML-Server_06_70_Simplot\webclient.ear\webclient.war\WEB-INF\lib\

I renamed the original JAR and put in the POCed JAR
image

Save it as required, then go back to the base and find the scf-manifest.xml file. Again copy this (to be sure) and then edit this file.

Update the description with whatever is required, in this case “Added Cafe1 POC”

Save and exit

Highlight everything in the folder and zip it back up with 7-ZIP.

image

image

Copy the zip file out and rename it with a .par extension (and a new name)

image

Copy this up to the components folder of Server Manager, then distribute and apply it as you would any normal tools release.
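The whole repack can also be scripted.  Here is a minimal sketch using Python’s zipfile module; the file and member names are hypothetical, and a real PAR nests webclient.ear and webclient.war inside it, so you would repeat the same replace-one-member trick at each level:

```python
import zipfile

def repack_par(par_path, member, new_content, out_path):
    """Copy a PAR (really a zip) archive, replacing one member's content."""
    with zipfile.ZipFile(par_path) as src, \
         zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename == member:
                dst.writestr(member, new_content)        # swapped-in content
            else:
                dst.writestr(item, src.read(item.filename))  # copied as-is

# Build a toy PAR to demonstrate (a real one comes from Server Manager).
with zipfile.ZipFile("orig.par", "w") as z:
    z.writestr("scf-manifest.xml", "<manifest><desc>base</desc></manifest>")
    z.writestr("webclient.ear", b"...")  # really a nested zip

repack_par("orig.par", "scf-manifest.xml",
           "<manifest><desc>Added Cafe1 POC</desc></manifest>", "poc.par")

with zipfile.ZipFile("poc.par") as z:
    print(z.read("scf-manifest.xml").decode())
```

Because zip archives cannot be edited in place, the function rewrites every member into a fresh archive – which is exactly what the highlight-everything-and-rezip step with 7-Zip does by hand.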