Friday 31 October 2014

JD Edwards performance improvements, analysis and holistic approach

I should make some observations about performance analysis and try to group them into some logical areas.

Firstly, you need to quantify performance against a known benchmark or your own benchmark.  In general, everyone’s ERP is different, so the most beneficial comparisons will come from your own benchmarks.  Performance can almost always be improved.  It can be improved with hardware, software, code, data or process changes, or a combination of the five.

Where do performance problems commonly come from?

This is a difficult question to answer, but I’ll provide a list and some examples.

  • Database – this is generally the cause of many performance issues, especially for an ERP like JD Edwards.  JD Edwards is database agnostic, which means that its queries and its interface to the database are written generically.  This is fantastic if you ever want to change database platform, but it comes at a price.  The price is that you lose the ability to implement database-specific tuning from within the ERP on a platform-by-platform basis.  Don’t stop reading now, there is more.
    • Each database supported by JD Edwards gives you the ability to tune queries outside of the ERP, so this is not a limitation of the ERP – it’s how you manage it.  For example, on an Oracle database you cannot implement hints via the OCI-based APIs that JD Edwards provides, but you can implement a SQL profile to effect the same changes.
    • All databases allow you to look at poor performing statements and address them with various panaceas: indexes, statistics and so on.
    • Size of data.  This is critical.  I always recommend maintaining (purging and archiving) the database size, so that you are not continually re-tuning – everything stays in a nice performance equilibrium.  Statistics won’t need to change and everything will keep running sweetly.
    • Indexing in general.  Look at your read-to-write ratios; generally a new index is not going to hurt you.  If you are worried about the time it takes to update indexes (you should be), then look at what indexes have not been used over the last 6 months and drop them (yes – you heard me).  One way of finding them is sketched just after this list.
  • Hardware allocations.  This is the most basic problem with the most basic fix (unless you have all 128 cores allocated and you still need more power).  It is simple to diagnose and simple to fix in today’s virtual world – look at the stats and allocate more where needed.  More memory on the DB server: bring the data closer to the CPU!
  • Software tuning.  Generally, this is where all of the action is going to occur and where you are going to get a lot of gains.  This can be changes to code, changes to process or changes to configuration.  Look at the classic bug with urandom that I posted about earlier – it will bring a site to its knees, and you’ll spend weeks trying to find it.  I’ve seen some of the biggest improvements come from process change, where changing how a PO (for example) is processed gets the software to skip an unrequired step and turns it all around.
  • Network.  This is becoming less critical given the 1Gb and 10Gb networks that are being seen at the moment.  WANs are getting better and network prioritisation is getting better, so I see less and less of this being problematic.
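On the point about dropping unused indexes, here is a minimal sketch of finding candidates on Oracle.  It assumes you have previously enabled monitoring on the indexes you care about (ALTER INDEX … MONITORING USAGE), and it only reports on the schema you are connected as:

select index_name, table_name, start_monitoring
from v$object_usage
where used = 'NO'          -- never used since monitoring was switched on
order by table_name, index_name;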

What tools are available to you to fix performance issues in JD Edwards?

Native:

Careful analysis of batch performance (using queries like those that have been identified in the link).

Careful analysis of BSFN timings that you can extract from Server Manager is great for working out whether there are particular BSFNs that are slow.

Performance Workbench:

Download the Myriad tool, which gives sub-second timestamps on JD Edwards database and web functions.  This benchmarks performance and allows comparison graphing against old results.  Myriad will even send you information on how you stack up against industry standards – how your site compares!  Myriad can tell you where you rank amongst the many clients that have run the software.

ERP Analytics:

Gives you unparalleled insight into performance and end user activity in your ERP.  Who is doing what and when.  Ever wanted to see average performance with a geo overlay – easy. 

[screenshot]

Average page load time: The average amount of time (in seconds) it takes that page to load, from initiation of the pageview (e.g., click on a page link) to load completion in the browser.

 

[screenshot]

Average server response time: the time for your server to respond to a user request, including the network time from the user’s location to your server.

We track all these metrics and more, and allow you to view them by application, user, region (based on network geography) and more.

Remember to see here for more information.

Load Testing as a Service:

Actually load test JD Edwards with specific Oracle tools and specific JD Edwards load testing knowledge.  Get Myriad to load test your ERP with JD Edwards technical experts – people who know how fast JD Edwards can go and how to make improvements that make the ERP faster.

JD Edwards load testing

Tuesday 28 October 2014

cookies… No more… I’ve had enough!

I’ve had a number of issues with cookies lately, which means I’ve had a number of issues with security.  What else does this mean?  I’m working with things that people are trying to keep quiet.  If you follow this blog a little, you’ll recall the PS_TOKEN saga (which is still ongoing).  At this stage we might be using the F5 to not send the token to JDE and therefore prevent the login issues.  How painful is that!

We’ve had another interesting scenario, once again involving cookies in iframes.  My client has a 3rd party application for AP scanning.  This writes URLs to POs, so that the original PO can be viewed from JD Edwards in the media object iframe.  Sure, this is nice, but it was not working, with some painful “to and fro” of cookies.  Once again I was using the F12 developer tools in IE9 (yes – I know – old school) to work out what was going on.

I found that the iframe cookie is the “poorer cousin” of the browser cookie, especially where IE9 is concerned.  Once again, Chrome just keeps trucking – it loves iframes and handles their cookies properly…  I like Chrome more and more.

At the end of the day, I was able to see the blocked cookies under Settings -> Safety -> Webpage privacy settings… then Sites.  Make sure that the sites in the URLs used by media objects are listed as allowed to use cookies (mine were blocked).

The other thing that I did (which was easier) was to go to Trusted Sites and add the site https://apscanning.stuff.things to the list of trusted sites.

[screenshot]

Now I can click on my URL type attachment and go straight into the scanning application!  The browser allows the iframe to load the cookie and treats it like a peer.

Note that this was analysed using a combination of the network trace in the Internet Explorer developer tools and a little bit of “Microsoft Network Monitor 3.4”.  I must admit that for a simple and free tool, this is really good for numpties like me to analyse traffic.

Thursday 16 October 2014

oracle accounts expired, don’t let it happen to you

 

A quick gotcha: Oracle likes expiring passwords every 180 days (the default profile setting).

Don’t let this happen to your awesome JD Edwards implementation on oracle.

Ensure that you have a default profile set up, and ensure that you have no expiry on PASSWORD_LIFE_TIME!

spool PasswordsAndProfileSummary.txt
set feedback on
set echo on
set linesize 1000
set timing on
connect jde/password@e1prod;
select username, account_status, expiry_date, profile from dba_users;
select resource_name,limit from dba_profiles;
connect jde/password@e1dev;
select username, account_status, expiry_date, profile from dba_users;
select resource_name,limit from dba_profiles;

spool off;

This will produce output like the following (trimmed):

USERNAME                       ACCOUNT_STATUS     EXPIRY_DATE PROFILE
SYS                            OPEN                           DEFAULT
SYSTEM                         OPEN                           DEFAULT
OUTLN                          EXPIRED & LOCKED   22-OCT-13   DEFAULT
…

55 rows selected.

Elapsed: 00:00:00.05
SQL> select resource_name,limit from dba_profiles;

RESOURCE_NAME                    LIMIT
-------------------------------- ----------------------------------------
FAILED_LOGIN_ATTEMPTS            10
PASSWORD_GRACE_TIME              7
PASSWORD_VERIFY_FUNCTION         NULL
PASSWORD_REUSE_MAX               UNLIMITED
PASSWORD_LIFE_TIME               UNLIMITED

So if the password_life_time for the default profile is not unlimited, change it with:

alter profile default limit password_life_time unlimited ;
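It is also worth checking whether any accounts are already heading for the cliff.  A quick sketch (the 30 day window is arbitrary):

select username, account_status, expiry_date, profile
from dba_users
where expiry_date is not null
and expiry_date < sysdate + 30
order by expiry_date;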

Then you’ll be cooking with gas.

Performance and tuning pack, JD Edwards and oracle 11G

Wow…

I’ve done lots of Oracle tuning before, but it’s always been reactive.  That UBE is slow?  Okay… run with debugging… find slow statements… tune them…  Things have just got a whole lot better if you are licensed for the performance and tuning pack.

If you have this, just sit back and relax…

Then find the screen in Enterprise Manager that says “Automatic SQL Tuning Result Summary” and look through the recommendations.

[screenshot]

Look at the screen above: you can drill down into recommendations based upon SQL profiles, indexes or a change of statement.

Sample drill down is below, showing you anticipated savings for implementing the recommendations.

[screenshot]

You can get a global summary of index recommendations

[screenshot]

Jeepers! 

This thing is amazing.  In one fell swoop you can adjust many of the poor performing statements that are being executed against the entire database.  With a little bit of JD Edwards knowledge, you know what to fix and what not to.

So, I’ve gone ahead and implemented a pile of SQL_Profiles (and a couple of indexes) to fix some poor performing SQL.  I can’t wait to see the integrities run a little better in the morning.

SELECT NAME, SQL_TEXT, CATEGORY, STATUS
FROM DBA_SQL_PROFILES;
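If you prefer the command line to the Enterprise Manager screens, a recommendation from the automatic tuning task can also be accepted with DBMS_SQLTUNE.  This is only a sketch: the object_id (which identifies the recommendation within the task) and the profile name below are hypothetical.

DECLARE
l_profile_name VARCHAR2(30);
BEGIN
l_profile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
task_name => 'SYS_AUTO_SQL_TUNING_TASK',  -- the automatic tuning task
object_id => 42,                          -- hypothetical recommendation id
name      => 'my_sql_profile');
END;
/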

Then if they do not work:

BEGIN
DBMS_SQLTUNE.DROP_SQL_PROFILE (
name => 'my_sql_profile'
);
END;
/
Note that SQL profiles are a more elegant way of getting hints into JD Edwards, as you cannot implement database hints in JDE itself.  Another poor man’s way of doing this is forcing parallelism in a “logon” trigger.

Wednesday 15 October 2014

Cookies and PS_TOKEN and JD Edwards and trouble

 

We are getting many instances of “The JD Edwards EnterpriseOne token is invalid. Please sign in again”. What?  We’ve never had this before, we’ve just gone live.  What is going on?

It took some time, but we had the following to go on:

<PRE class="brush: powershell; auto-links: true; collapse: false; first-line: 1; gutter: true; html-script: false; light: false; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;">JAS JDE logs:
12 Oct 2014 11:29:52,146 [SEVERE] - [JAS] User nullSecurityToken is not Validated.Security Server return error status: 16/ Invalid Token
12 Oct 2014 11:30:03,956 [SEVERE] - [JAS] User nullSecurityToken is not Validated.Security Server return error status: 16/ Invalid Token</PRE>

And on the server (security server)

9420/5292 MAIN_THREAD Sun Oct 12 11:29:52.162000	PSAuthToken.cpp561 
ValidateAuthToken: Token length(0) invalid

9420/5292 MAIN_THREAD Sun Oct 12 11:29:52.163000 jdeksec.c4587
Failed to validate auth token: token library unable to validate token

9420/5292 MAIN_THREAD Sun Oct 12 11:29:52.163001 jdeksec.c1387
KSEC0000400 - Failed to authenticate user for token: eSecInvalidToken: The security token is invalid

 


[screenshot]


After quite a bit of time, we found that the browser was sending a PeopleSoft cookie (PS_TOKEN) to the JD Edwards web server.  The web server then thought the cookie was relevant for E1 authentication and passed it to the security server.  The cookie was not in a format that the security server recognised, so it was treated as rubbish and the client could not log in from that browser.


Arrggghhh! The cookie was supposed to expire after the PSFT session, but in many circumstances it was hanging about.  We wrote some JSP to delete the cookie, but IE needed a restart!  Good old Chrome just kept trucking – what a browser.


Cookie management was much better in Chrome than in Internet Explorer, as was the ability to manipulate cookies (which might be seen as a “not so good thing”).


Oh yeah, and the reason why we are starting to get this now is that the domain that the PSFT cookie was defined for was too broad, and matched the domain for our load balanced VIP, so the browser just sends the generic cookie to any site that matches the broad domain definition (which is rubbish too!).

Data Dictionary comparison between DEV and PROD

DD synchronisation is important during a project.  It’s vital that the DD you are testing against (DVDD910) is the same as DD910.

So, ensuring that you are not missing DD items is easy, but what about changes?  Changes to length, lookup items, description, decimal places?  Well, do I have a convoluted process for you.  Firstly, identify them with this SQL:

select * from dd910.f9210 t1 where not exists
(select 1 from dvdd910.f9210 t2
where t1.frdtai = t2.frdtai
AND t1.FRCLAS = t2.FRCLAS
AND t1.FRDTAT = t2.FRDTAT
AND t1.FRDTAS = t2.FRDTAS
AND t1.FRDTAD = t2.FRDTAD
AND t1.FRPDTA = t2.FRPDTA
AND t1.FRARRN = t2.FRARRN
AND t1.FRDVAL = t2.FRDVAL
AND t1.FRLR = t2.FRLR
AND t1.FRCDEC = t2.FRCDEC
AND t1.FRDRUL = t2.FRDRUL
AND t1.FRDRO1 = t2.FRDRO1
AND t1.FRERUL = t2.FRERUL
AND t1.FRERO1 = t2.FRERO1
AND t1.FRERO2 = t2.FRERO2
AND t1.FRHLP1 = t2.FRHLP1
AND t1.FRHLP2 = t2.FRHLP2
AND t1.FRNNIX = t2.FRNNIX
AND t1.FRNSY = t2.FRNSY
AND t1.FRRLS = t2.FRRLS
AND t1.FROWDI = t2.FROWDI
AND t1.FROWTP = t2.FROWTP
AND t1.FRCNTT = t2.FRCNTT
AND t1.FRSCFG = t2.FRSCFG
AND t1.FRUPER = t2.FRUPER
AND t1.FRALBK = t2.FRALBK
AND t1.FROWER = t2.FROWER
AND t1.FROER1 = t2.FROER1
AND t1.FROER2 = t2.FROER2
AND t1.FROWDR = t2.FROWDR
AND t1.FRODR1 = t2.FRODR1
AND t1.FRDBID = t2.FRDBID
AND t1.FRBFDN = t2.FRBFDN
AND t1.FREBID = t2.FREBID
AND t1.FRBFEN = t2.FRBFEN
AND t1.FRSFID = t2.FRSFID
AND t1.FRSFMN = t2.FRSFMN
AND t1.FRBVID = t2.FRBVID
AND t1.FRBVNM = t2.FRBVNM
AND t1.FRPLFG = t2.FRPLFG
AND t1.FRDDID = t2.FRDDID
AND t1.FRAUIN = t2.FRAUIN
) order by t1.frdtai desc;

Nice.  So now we have our list of problem items, we need to identify what is different.  That is done by copying the results of the above into a spreadsheet.


You’ll have two tabs: one with the contents of the above, and the other with the results of the same query reversed:

select * from dvdd910.f9210 t1 where not exists
(select 1 from dd910.f9210 t2
-- … and so on, with the same join conditions as the query above

 


So now, with those two tabs created and labelled, you can run the macro below.  This is going to highlight the differences and also concatenate the differing value onto the original.

Sub compareSheets(shtBefore As String, shtAfter As String)
    Dim mycell As Range
    Dim mydiffs As Integer

    'If the current cell is not a date then proceed (else skip to the next cell), then
    'if it is not the same as the corresponding cell in sheet shtBefore,
    'mark it green, append the other sheet's value and repeat until the entire range is used
    For Each mycell In ActiveWorkbook.Worksheets(shtAfter).UsedRange
        If Not IsDate(mycell) Then
            If Not mycell.Value = ActiveWorkbook.Worksheets(shtBefore).Cells(mycell.Row, mycell.Column).Value Then
                mycell.Interior.Color = vbGreen
                mycell.Value = mycell.Value & "||" & ActiveWorkbook.Worksheets(shtBefore).Cells(mycell.Row, mycell.Column).Value
                mydiffs = mydiffs + 1
            End If
        End If
    Next

    'Display a message box stating the number of differences found
    MsgBox mydiffs & " differences found", vbInformation
    ActiveWorkbook.Sheets(shtAfter).Select
End Sub

Sub RunCompare()
    Call compareSheets("DVDD910", "DD910")
End Sub

Note that you might need to ensure that there is the same record count in both tabs, otherwise everything will be marked different.
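A quick way to check that the two tabs contain the same number of rows before running the macro (a sketch, using the same schemas as above):

select 'DD910' src, count(*) row_count from dd910.f9210
union all
select 'DVDD910', count(*) from dvdd910.f9210;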


[screenshot]

Sunday 12 October 2014

The server is taking an excessive time to process your request

Classic message from the JD Edwards menus, very bizarre scenario.  We’ve been working on a go-live and we are starting to get this popup and message on the client machines.

[screenshot]

We’ve completed load testing, UAT, SIT, 12 months of project and have not had this issue and it starts on go-live weekend.  Who would have thought.

I was lucky enough to have a client machine to myself with the error, and looked into the Internet Explorer developer tools (which are nice, and catching up with Chrome’s).  I was able to determine a couple of things from this error / message.

Firstly, it’s all local and it’s all .js related – this is a good thing; at least I’ve not broken anything on the servers.  With a network trace enabled, there is no traffic when the message occurs.  I also get menus not rendering properly, duplicating other menus, or not appearing at all.  Very strange.

[screenshot]

I can also see that the problems seem to be in the dropdownMenu.js file, but why is this occurring now?

continueWaiting = confirm("The server is taking an excessive time to process your request.  Click \"OK\" to continue waiting.");
if (continueWaiting)

The only thing I can think of is that the website was “skinned” with new GIFs / PNGs yesterday and this is playing havoc with the local machine cache.  I deleted my temporary internet files and also told the browser to “Always download a new version of the page”, and the problem has gone away.


[screenshot]


Above is a timer profile snap of when the issue was occurring.


So this seems to have fixed it on my machine, but I don’t want to have to do this on every machine tomorrow.  I also notice that there is a McAfee script plugin in Internet Explorer, and this concerns me a little too.


Watch this space!


BTW, this was a great page on Oracle Support: https://support.oracle.com/epmos/faces/DocContentDisplay?_afrLoop=64490606605294&id=1267490.1&_afrWindowMode=0&_adf.ctrl-state=7c29aqr3i_446

Friday 10 October 2014

vbscript: tell me when my machine / website / anything comes up

Here is a simple script that will tell you when an IP / name / web address comes up:

Dim strServerName, oFSO, WSHShell, strServerFile, PINGFlag, i, ServerPingFlag
Dim strLogFileName, strLogFolderName, strLogPath, objLogFolderName
Dim objLogFileName, objLogTextFile, strMyDate
Const ForAppending = 8
Public strMailTo, strSMTP, strSubject, strBody, strSMTPUserName, strSMTPPassword, strSMTPPort

Set oFSO = CreateObject("Scripting.FileSystemObject")
Set WSHShell = CreateObject("WScript.Shell")

'Keep looping back through the script so that it stays running
i = 0
Do While i = 0
serverList = ARRAY("aubdc00-jap01p","aubdc00-jap02p")

for each strServerName in serverList
If ServerPing(strServerName) Then
ServerPingFlag = "Online"
EmailAdmins strServerName
'Ping was successful
Else
'Ping was not successful
ServerPingFlag = "***Offline***"
End If
next
'This is in milliseconds, 60000 is 60 seconds
WScript.Sleep 60000
Loop

'********************************************************************************
'ServerPing Function
'Ping the server and if available return true, otherwise false
'********************************************************************************
Function ServerPing(strServerName)
Set WSHShell = CreateObject("WScript.Shell")
PINGFlag = Not CBool(WSHShell.Run("ping -n 1 " & strServerName, 0, True))
If PINGFlag = True Then
'Ping was successful
ServerPing = True
Else
'Ping not successful
ServerPing = False
End If
End Function

'********************************************************************************
'EmailAdmins Sub-Routine
'Send email to the admins
'********************************************************************************
Sub EmailAdmins (strServerName)
on error resume next
Dim iMsg
Dim iConf
Dim Flds
Const cdoSendUsingPort = 2
FromEmailAddress="jde@myriad-it.com"
Const strSmartHost = "smtpServerName.myriad-it.com"
Company="Myriad"
RecipientNames = ARRAY("smoir@myriad-it.com")


for each name in RecipientNames
'Create the message object
Set iMsg = CreateObject("CDO.Message")
'Create the configuration object
Set iConf = iMsg.Configuration
'Set the fields of the configuration object to send using SMTP via port 25.
With iConf.Fields
.item("http://schemas.microsoft.com/cdo/configuration/sendusing") = cdoSendUsingPort
.item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = strSmartHost
.Update
End With

'Set the message to,from,subject,body properties.
'attachment is never set in this script, so Len(attachment) is 0 and the no-attachment branch is used
if Len(attachment) > 1 then
With iMsg
.AddAttachment attachment
.To = name
.From = FromEmailAddress
.Subject = "Server is up" & " " & now() & strServerName
.TextBody = chr(1) & " " & strServerName
.Send
End With
else
With iMsg
.To = name
.From = FromEmailAddress
.Subject = "Server is up" & " " & now() & strServerName
.TextBody = chr(1) & " " & strServerName
.Send
End With
end if
set iMsg = Nothing
next

End Sub

This can easily be modified to do all sorts of emailing, when machines are up or down.  This was conceived because we were waiting for the AS/400 to come back up after a backup, so instead of hanging around – we just awaited the email… (perhaps from the pub – perhaps not…)

Thursday 9 October 2014

Determine if oracle archive logging is enabled

This is some handy SQL that you can run without necessarily being a DBA or having a SYS role:

A database that has archive logging enabled will look like:

SELECT GROUP#, ARCHIVED
FROM V$LOG;

4 YES
5 YES
6 YES
7 NO

SELECT LOG_MODE FROM V$DATABASE;
ARCHIVELOG

 


A database without archive logging enabled will look like:

SELECT GROUP#, ARCHIVED
FROM V$LOG;

4 NO
5 NO
6 NO
7 NO
8 NO
9 NO

And
SELECT LOG_MODE FROM V$DATABASE;

NOARCHIVELOG
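If it turns out you are in NOARCHIVELOG mode and you want to change it, the switch itself looks like the below.  A sketch only – it needs SYSDBA and a restart of the instance, so plan the outage:

shutdown immediate
startup mount
alter database archivelog;
alter database open;
archive log list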

Wednesday 8 October 2014

column count reconciliation in JD Edwards, specs vs reality

Below is some sample SQL that will create a reconciliation table (fia_temp_table_count) and then populate it with the central objects column count for each table.  It’ll also put in the relational database column count and then show you a nice little summary of the differences.

This is quite nice for a quick (minutes) comparison of specs vs. relational database columns.

-- drop the reconciliation table first if it is left over from a previous run
drop table fia_temp_table_count ;
create table fia_temp_table_count 
(table_name varchar(50) not null,
spec_column_count integer not null,
relational_column_count integer not null)
;
-- insert the relational column counts
--
insert into fia_temp_table_count
select table_name, 0, count(1)
from all_tab_columns
where owner = 'FIADTA'
group by table_name
order by table_name ;
commit;
-- get the spec table counts from the F98711
--
update fia_temp_table_count
set spec_column_count = (
select count(1)
from PY910.f98711
where ltrim(rtrim(tdobnm)) = table_name);
commit;
select table_name || ' ' || spec_column_count || ' ' || relational_column_count
from fia_temp_table_count
where spec_column_count != relational_column_count;

Note that an extension of this (and only for use by experts) is to use some of this to generate the DDL for creating the additional columns.  For example, just say I find that a table in FIADTA is missing some columns.  Well, I could run the SQL below to give me the DDL to correct the table.  This is JUST for missing columns and you really need to know what you are doing to attempt it.  In this example I’m specifying the F3703, but I could just as easily do it for all of the results of the last SQL above.


Remember, you don’t need to run the results, but the output will assist you!


I did two statements; you probably only need one, using data_type.  Mine was a little more complicated because I had some NCHAR and CHAR inconsistencies.

select 'ALTER TABLE FIADTA.F3703 ADD  ' || t2.column_name || ' ' || data_type || ';' 
from all_tab_columns t2
where t2.owner = 'FBADTA'
and t2.table_name = 'F3703'
and not exists
(
select 1
from all_tab_columns t1
where t1.owner = 'FIADTA'
and t1.table_name ='F3703'
and t2.column_name = t1.column_name)
and t2.data_type = 'NUMBER'
union
select 'ALTER TABLE FIADTA.F3703 ADD ' || t2.column_name || ' ' || data_type || '(' || data_length || ');'
from all_tab_columns t2
where t2.owner = 'FBADTA'
and t2.table_name = 'F3703'
and not exists
(
select 1
from all_tab_columns t1
where t1.owner = 'FIADTA'
and t1.table_name ='F3703'
and t2.column_name = t1.column_name)
and t2.data_type = 'NCHAR'
;

Monday 6 October 2014

quick guide to statistics and individual generation for oracle database

This is a quick way to restore some performance equivalence between environments, or after some heavy data loads.  I had a report (an integrity) finishing in 9 minutes in PY and then 45 minutes in PD (actually 3 different integrities were slower in PD).  PD had a faster database server with more memory – what was going on?  The data was identical – it was a copy.

It all came down to stats, but interestingly, when the DBA ran their standard stats generation it went slow again…  Weird.

I ran the following for the table

begin
DBMS_STATS.GATHER_TABLE_STATS (
ownname => '"PDFINDTA"',
tabname => '"F0911"',
estimate_percent => 10
);
end;
/

Then executed this for the indexes.

select 'ALTER INDEX "PDFINDTA"."' || index_name || '" compute statistics;' 
from all_indexes where table_name = 'F0911' and owner = 'PDFINDTA' ;

Then I ran all of the results of the above, which looked somewhat like the below:

alter index "PDFINDTA"."F0911_0" compute statistics;

Thursday 2 October 2014

using OATs to monitor JD Edwards performance testing

OATS is cool, but you can make it uber cool.

For my load testing, I create a bunch of custom monitors to monitor what I want, for example:

  • Number of batch jobs launched today
  • Number of sales orders entered today
  • Number of active processes in the database

How?  Easy!!

select count(1) from svmap02.f986110 where jcsbmdate = (select max(jcsbmdate) 
from svmap02.f986110)
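The sales order monitor from the list above is the same idea.  A sketch only – it assumes business data lives in PRODDTA and uses the F4211 order date (SDTRDJ), which is stored as a JDE Julian date:

-- count distinct sales orders entered today; sysdate is converted to
-- the JDE Julian format (1000 * (year - 1900) + day of year)
select count(distinct sddoco) from proddta.f4211
where sdtrdj = to_number(to_char(sysdate, 'YYYYDDD')) - 1900000;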

[screenshot]

So now, when I do performance testing, I get to see the number of UBEs submitted getting larger with my tests, therefore validating what is going on in the ERP.

So, easy to get this done with other tables too.

Remember that if you are doing business data tables, try and use the key and not scan the table too much with date queries.

There are many ways of doing this (sequences etc.), but you might not want to use them.

See, now I have my own custom metrics that I can export and run anywhere.

[screenshot]

Then, look at the great results:

[screenshot]

ORA-00020 when load testing

I have 400 concurrent users killing JD Edwards with batch and interactive jobs, but I’m chewing through the processes on oracle, so much so that I need to augment the count again and again.

select 'processes utilization from v$resource_limit', current_utilization, 'max' || max_utilization
from v$resource_limit where resource_name = 'processes';

select 'sessions utilization from v$resource_limit', current_utilization, 'max' || max_utilization
from v$resource_limit where resource_name = 'sessions';

 


Above I put sessions and processes, as they go hand in hand.


The above SQL will show you your maximum processes and also sessions, which is great. 
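To see who is actually consuming those processes and sessions, something like this (a sketch) groups the current sessions by machine and program:

select machine, program, count(*) session_count
from v$session
group by machine, program
order by count(*) desc;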


I thought that I’d also show you the lifecycle of the change that you need to make.  Who says CNCs don’t make great DBAs!

alter system set processes=950 scope=spfile;
System altered.

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 16702308352 bytes
Fixed Size 2238128 bytes
Variable Size 3489663312 bytes
Database Buffers 13186891776 bytes
Redo Buffers 23515136 bytes
Database mounted.
Database opened.

SQL> show parameter process

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes integer 1
cell_offload_processing boolean TRUE
db_writer_processes integer 1
gcs_server_processes integer 0
global_txn_processes integer 1
job_queue_processes integer 20
log_archive_max_processes integer 4
processes integer 850
processor_group_name string ****
SQL> quit
Disconnected

Wednesday 1 October 2014

parallel syntax for altering your entire sql session

 

As if I’m not going to write an “on logon” trigger that runs this…  Surely I’ll be 8 times faster than everyone else (as you know – or maybe you don’t – this does not work in all situations).

alter session force parallel DDL PARALLEL 8;
alter session force parallel DML PARALLEL 8;
alter session force parallel QUERY PARALLEL 8;
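For completeness, a minimal sketch of that logon trigger.  The trigger name is made up and 'JDE' is assumed to be the user you want to target – and remember the caveat above: forcing parallelism everywhere is not always a win.

create or replace trigger jde_force_parallel_trg
after logon on database
begin
-- only force parallelism for the JDE user, not for everyone
if user = 'JDE' then
execute immediate 'alter session force parallel query parallel 8';
end if;
end;
/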