Wednesday 6 September 2017

Bulk version change tips and tricks

Ever needed to create a lot of versions as copies of others?  Ever needed to create versions and also change their data selections and processing options?  Say, for example, you opened another DC and wanted to copy all of the configuration from the previous one – reuse all of your IP.  Well, do I have some good news for you… it can be done.

The first step to getting this working is brainstorming, and one of my favourite people to brainstorm with is Shae.  We can have quick, single-syllable conversations, grunt a bit – but at the end of the day articulate an elegant solution to some amazing technical problems.  I’m very lucky to have peers like this to work with.  Shae came up with the idea of using a par file to complete this task – and that was a great idea!  I can easily create a project with SQL and populate it with all of the versions I need to copy.  I can also create all of the F983051 records and central objects to create the base objects, but I’d need to use load testing or scripts to change all of the PO’s and data selections.

Shae’s idea to use the par file was great – it seemed possible.  The client in question has about 500 versions, all for a particular DC, and I needed to change names, PO’s and data selections based upon the new name – okay, challenge accepted.

There are heaps of ways of doing this – Java, Node.js, Lambda, VBScript – but I went old school: a little bit of sed and awk.

I basically took the par file, sftp’d it to Linux and then ripped it apart.

The structure was not too crazy to deal with, although it did feel like Russian dolls: a zip file inside a zip file inside a zip file.

There were also some pretty funky things in the middle, like UTF-16 files instead of normal ones, and base64-encoded strings for PO’s – but nothing was going to stop me.

What I’m going to do is just cut and paste the script here; you’ll get the idea of what needed to be done from the sections and the amazing comments.

In my example the version names, PO’s and data selections all changed from string TT to string GB – so it was really easy to apply these rules throughout the script.

At the end of the day, this created a separate par file that you can restore to a project, with all of the new versions in it!  Really nice.

There is a tiny bit of error handling and other bits and pieces – but really this is just showing you what can be done.

Imagine if you needed to change the queue on hundreds of versions, or anything like that.  You could use some of the logic below to get it done (or be nice to me).
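
Before the script itself, here’s how you’d run it.  A quick usage sketch – the script and par file names below are just examples of mine, any OMW project par file will do.  Run it on the Linux box, from the directory holding the par file:

#usage: ./changepar.sh <parfile> <from string> <to string>
chmod +x changepar.sh
./changepar.sh PRJ_TT_VERSIONS.par TT GB
#the rebuilt archive lands in /tmp as PRJ_TT_VERSIONS.par.zip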


if [ $# -ne 3 ]
   then
     echo "USAGE: $0 <parfile> <FROM STRING> <TO STRING>"
     exit 1
fi

_debug=1
workingDir=/tmp
parfile=$1
fromString=$2
toString=$3
parfileNoExt=`echo $parfile | awk -F. '{print $1}'`
expDir=$workingDir/$parfileNoExt

rm -fR $expDir

#unzip the file to the working dir
unzip $parfile -d $expDir

for file in `ls $expDir/*.par`
   do
     #echo $file
     dir=`echo $file | awk -F. '{print $1}'`
     #echo $dir
     unzip -q $file -d $dir
done

#parfile
# parfile UBEVER_R57000035_PP0001_60_99
#   F983051.xml
#   F983052.xml
#   manifest.xml
#   specs.zip
#   RDASPEC
#    R5700036.PP0002.1.0.0.0.0.xml
#

#now lets extract the specs zip file for each

find $expDir -name specs.zip -execdir unzip -q \{} \;

#now delete par files and all else

find $expDir -name '*.par' -exec rm \{} \;
find $expDir -name specs.zip -exec rm \{} \;

# now we need to rename directories
if [ $_debug = 1 ]
then
   echo "RENAME DIRS"
fi

cd $expDir
for dir in `ls -d *${fromString}*`
do
   echo $dir
   newname=`echo $dir | sed s/_${fromString}/_${toString}/g`
   newname=`basename "$newname"`
   echo $newname
   cd $expDir
   mv $dir $newname
done

#rename files, generally in the spec dir
#for file in `find $expDir -name "*${fromString}*.xml" -type f -print`
#holy crap, that took a long time to encase this with double quotes so as not to lose the
#dodgy versions
if [ $_debug = 1 ]
then
   echo "RENAME FILES"
fi

find $expDir -name "*${fromString}*.xml" -type f |while read file; do
     newfile=`basename "$file"`
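     #the /2 flag makes sed replace only the second occurrence of the
     #string in the file name (presumably leaving the object name alone)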
     newfile=`echo "$newfile" | sed s/${fromString}/${toString}/2`
     currDir=`dirname "$file"`
     mv "$file" "$currDir/$newfile"
     if [ $? -ne 0 ]
       then
         echo "MOVE ERROR " "FILEFROM:$file:" "FILETO:$currDir/$newfile:"
         sleep 10
         exit
     fi
done


if [ $_debug = 1 ]
then
   echo "SED CONTENTS OF FILES AND CREATE .NEW"
fi

#This is ridiculous - I need to convert manifest.xml
#from utf-16 to utf-8, run the sed, and then convert it back again
#this is killing me
echo 'CONVERTING MANIFEST.XML'
for file in `find $expDir -name manifest.xml -print`
do
   echo $file
#  iconv -f utf-16 -t utf-8 $file | sed s/${fromString}[0-9]/${toString}/g > $file.utf8
   iconv -f utf-16 -t utf-8 $file | sed s/${fromString}0/${toString}0/g > $file.utf8
   iconv -f utf-8 -t utf-16 $file.utf8 > $file
   rm $file.utf8
done
  
#okay, now for the contents of the files
set -x   #trace each command while the in-file replacements run
grep -r -l "${fromString}" $expDir | while read file; do
     newfile="${file}.new"
     echo $file "contains $fromString"
     sed s/${fromString}/${toString}/g "${file}" > "${newfile}"
     #note that if you need to compare the internals of the files
     #comment out the following lines.
     rm "$file"
     mv "$newfile" "$file"
     if [ $? -ne 0 ]
       then
         echo "MOVE ERROR " "FROMFILE:$newfile:" "TOFILE:$file:"
         sleep 10
         exit
     fi
done

#Need to decode the base64 PO string and replace fromString there too
#find the F983051's
#create a variable for PODATA, check for PP
echo "Processing F983051 VRPODATA"
for file in `find $expDir -name F983051.xml -print`
do
   base64String=`cat $file | tr -s "\n" "@" | xmlstarlet sel -t -v "table/row/col[@name='VRPODATA']"  | base64 -d`
   charCount=`echo $base64String | wc -c`
   if [ $charCount -gt 1 ]
     then
     base64String=`echo $base64String | sed s/${fromString}/${toString}/g`
     echo 'changed string:' $base64String
     base64String=`echo $base64String | base64`
     xmlstarlet ed --inplace -u "table/row/col[@name='VRPODATA']" -v $base64String $file
     #just need to run the xmlstarlet ed
   fi
done

#contents all replaced - job done with the strings...
#now we need to zip everything back up

if [ $_debug = 1 ]
then
   echo "Creating spec.zip"
fi

for dir in `find $expDir -mindepth 1 -maxdepth 1 -type d`
   do
     cd $dir
     zip -r specs.zip ./RDASPEC
     zip -r specs.zip ./RDATEXT
     rm -fr ./RDASPEC ./RDATEXT
done

#now we create all of the par files from the dirs under expDir
#(ls on its own does not print permissions, so use find to pick out the directories)

for dir in `find $expDir -mindepth 1 -maxdepth 1 -type d`
   do
     cd $expDir
     zip -r ${dir}.par `basename $dir`
     rm -rf $dir
done

#now the root parfile
cd $expDir
rm -rf ../${parfile}.zip
zip -r ../${parfile}.zip *
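
Once it finishes, it’s worth sanity-checking the rebuilt archive before restoring it to a project.  A quick check along these lines (the file name is my example again):

#the old string should be gone from every inner par file name
unzip -l /tmp/PRJ_TT_VERSIONS.par.zip | grep -c GB
unzip -l /tmp/PRJ_TT_VERSIONS.par.zip | grep -c TT    #expect 0

A par file is just a zip in disguise, so rename the .zip back to .par and it’s ready to restore through OMW.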

Friday 1 September 2017

JD Edwards and microservice based integrations

The cloud is changing our approach to everything, and so it should.  It gives us so many modern and flexible constructs, which enable faster innovation and agility and deliver value to the business sooner.

You can see from my slide below that we should be advocating strategic integrations in our organisations, shown below as a microservice layer.  This single layer gives a consistent, “write the code once” interface for exposing JD Edwards to BOB (Best of Breed) systems.  It will also allow generic consumption and exposure of web services – without having to write a lot of JD Edwards code, or get into too much technical debt.

If you look at the diagram below, we are exposing an open method of communicating with our “monolithic” and potentially “on-prem” services.  This microservice layer can actually be in the cloud (and I would recommend this).  You could choose to use middleware to expose this layer, or the generic pub/sub techniques provided by all of the standard public cloud providers.


[image: strategic integration architecture – a microservice layer between JD Edwards and best-of-breed systems]


Looking at a little more detail in the diagram below shows you the modern JDE techniques for achieving this.  You’d wrap AIS calls as STANDARD interactions with standard forms.  Just like a BSSV was created to “AddSalesOrder”, the same could be done in a microservice.  This would be responsible for calling the standard and specific screens in JDE via AIS.  You are therefore abstracting yourself from the AIS layer.  If you needed to augment that canonical with information from another system, you are not getting too invested in JDE – it’s all in your microservice layer.
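
To make that a little more concrete, here is a minimal sketch of what the inside of such a microservice might do – one AIS formservice call hiding behind your own “AddSalesOrder” style endpoint.  I’m sticking with shell so it reads like the script above; the AIS server URL, credentials, form name and version are all placeholder values of mine, so check the request shape against the AIS documentation for your tools release:

#!/bin/bash
#assumed AIS server endpoint - replace with your own
AIS=http://ais.example.com:9300/jderest

#1. authenticate and grab a session token (uses jq to parse the JSON)
TOKEN=`curl -s -X POST $AIS/tokenrequest \
  -H 'Content-Type: application/json' \
  -d '{"username":"JDEUSER","password":"JDEPASS","deviceName":"msvc01"}' \
  | jq -r '.userInfo.token'`

#2. drive the standard sales order application via formservice
#   (form name and version here are placeholders of mine)
curl -s -X POST $AIS/formservice \
  -H 'Content-Type: application/json' \
  -d "{\"token\":\"$TOKEN\",
       \"formName\":\"P42101_W42101D\",
       \"version\":\"ZJDE0001\",
       \"formActions\":[]}"

The point is that the consumer of your microservice never sees any of this – if the AIS request shape changes, or you swap JDE out for something else, only this layer changes.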

This also gives you the added benefit of being able to rip and replace any of the pieces of the design, as you’ve created a layer of abstraction for all of your systems – Nice.  Bring on best of breed.

The other cool thing about an approach like this is that you can start to amalgamate your “SaaS silos”, which are the modern equivalent of “disconnected data silos”.  If your business is subscribing to SaaS services, you have a standardised approach to getting organisation-wide benefit from the subscription.

Outbound from JDE, you can see that we are using RTEs (Real-Time Events).  These might go directly to an AWS SQS queue, or to a Google Cloud Pub/Sub topic, or to Microsoft Azure cloud services – all of them can queue these messages.  The beauty of this is that the integration points already exist in JDE as RTEs.  You just need to point these queues (or the TXN server) at your middleware or cloud pub/sub service for reliable, fault-tolerant delivery.  You can then have as many microservices as you like subscribe to these messages and perform specific, independent tasks based upon the information coming in.
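
On the subscribing side, assuming the RTEs have been routed into an AWS SQS queue (the queue URL and region below are purely illustrative), a microservice can poll and process them with the standard AWS CLI:

#long-poll the queue that the transaction server feeds
aws sqs receive-message \
  --queue-url https://sqs.ap-southeast-2.amazonaws.com/123456789012/jde-rte-events \
  --max-number-of-messages 10 \
  --wait-time-seconds 20

#each message body contains the RTE payload - once processed, delete it
aws sqs delete-message \
  --queue-url https://sqs.ap-southeast-2.amazonaws.com/123456789012/jde-rte-events \
  --receipt-handle "<ReceiptHandle from the message above>"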


[image: JDE integration detail – AIS calls inbound, RTEs outbound to cloud queues]

Wow, JDE has done a great job of letting you innovate at the speed of cloud by giving you some really cool integration methods.  There is nothing stopping you plugging IoT, mobility, integrations, websites, forms and more into JD Edwards simply and easily – all while keeping an extremely robust and secure ERP that ensures master data management and a single source of truth.

This model works on prem, hybrid or complete cloud.