Latest Forum Threads in SAP on Oracle


    Hi Experts

     

    I am restoring a database from an offline backup on tape. The backup was taken three months ago; the backup utility is Commvault, which uses BR*Tools on the client. I can no longer find the corresponding backup log file (beklodrv.aff, three months old), and according to SAP Note 1003028, which I am following, the backup log file of the backup is needed to start the restore command.

     

    Is there any way to get the restore started without the backup log file, using the brrestore/brrecover commands? I have a successful backup on tape; I am not sure whether the log file is also on the tape, and I do not know how to retrieve it from the tape if it is.

     

    How could the log file have disappeared, and how can I keep the files in /oracle/SID/sapbackup for a longer time?

     

    Thanks

     

    Al Mamun
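    One way to avoid this situation (a sketch, not part of the original thread): periodically copy the BR*Tools logs to a location with longer retention, e.g. from cron, so a restore months later still has its detail logs. The temporary directories below stand in for /oracle/&lt;SID&gt;/sapbackup and a safe archive location.

```shell
#!/bin/sh
# Sketch: keep long-retention copies of BR*Tools backup logs so a restore
# months later still has its .aff detail logs. The temp directories below
# stand in for /oracle/<SID>/sapbackup and a long-retention target; on a
# real system this would run from cron against the real paths.
set -e
workdir=$(mktemp -d)
sapbackup="$workdir/sapbackup"      # stand-in for /oracle/<SID>/sapbackup
log_archive="$workdir/log_archive"  # long-retention target (e.g. NFS share)
mkdir -p "$sapbackup" "$log_archive"
printf 'demo detail log\n' > "$sapbackup/beklodrv.aff"  # log name from the thread
# Copy detail logs, preserving timestamps; cp -p never deletes the originals.
cp -p "$sapbackup"/*.aff "$log_archive"/
ls "$log_archive"
```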



    Hi All

     

    We have a stuck #$ intermediate table of around 124 GB left over after a failed online reorg. Can you please tell me how safe the command brspace -f tbreorg -t "*" -a cleanup is on a running system, and what its impact on the EDI40 table would be?

     

    Thanks

    Dinesh
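    For context, a sketch of how the leftovers can be sized before any cleanup (an assumption offered for illustration: BR*Tools names its online-reorg intermediate tables with a trailing #$, and the schema owner shown is a placeholder):

```sql
-- List leftover online-reorg intermediate tables and their size in MB.
-- Run as a DBA user; filter on your actual SAP schema owner if needed.
SELECT owner, segment_name, ROUND(SUM(bytes)/1024/1024) AS mb
FROM   dba_segments
WHERE  segment_name LIKE '%#$'
GROUP  BY owner, segment_name;
```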



    Dear Experts,

    We have scheduled the SAP_SLD_DATA_COLLECT job via RZ70 in our ECC system. Most of the time it runs successfully, but once a day it hangs.

    I checked the SLD system as well, and everything is fine there. I have also checked the RFCs SLD_UC and SLD_NUC; both work fine.

    What might be the issue? Kindly suggest. Please find the dev_wx file below.

     

    Environment:

    SAP ECC6

    Windows 2008 R2

    Oracle 11.3

     

     

    Warm Regards,

    Sumit Jha


  • 09/05/13--20:19: brarchive is failing
  • Hi,

    we have Oracle 11g and Linux redhat.

    We are trying to run /sapmnt/<SID>/exe/brarchive -u / -k yes -d disk -c -sd, but it fails with the error message below.

     

    BR0002I BRARCHIVE 7.20 (10)
    BR0006I Start of offline redolog processing: aemalgbk.svd 2013-09-06 05.10.44
    BR0484I BRARCHIVE log file: /oracle/<SID>/saparch/aemalgbk.svd
    BR0280I BRARCHIVE time stamp: 2013-09-06 05.12.44
    BR0301E SQL error -1031 at location BrInitOraCreate-2, SQL statement:
    'CONNECT / AT PROF_CONN IN SYSOPER MODE'
    ORA-01031: insufficient privileges
    BR0303E Determination of Oracle version failed

    BR0007I End of offline redolog processing: aemalgbk.svd 2013-09-06 05.12.44
    BR0280I BRARCHIVE time stamp: 2013-09-06 05.12.44
    BR0005I BRARCHIVE terminated with errors

     

    We have tried SAP Note 776505 (BR*Tools fail with ORA-01017 / ORA-01031 on Linux), but the issue remains the same.

    Please help me

    Regards

    Ganesh Tiwari
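    A quick check worth running (a sketch, based on the general cause SAP Note 776505 addresses, not a confirmed diagnosis for this system): ORA-01031 on "CONNECT / ... SYSOPER" usually means the OS user running brarchive is not in the group that grants SYSOPER via OS authentication. The user name below is only a demo fallback; on a real system pass the &lt;sid&gt;adm user.

```shell
#!/bin/sh
# Sketch: check whether an OS user has the group membership that OS
# authentication needs for SYSOPER connects (typically "oper" or "dba").
user=${1:-root}                 # demo falls back to a user that always exists
groups=$(id -Gn "$user")
echo "groups of $user: $groups"
case " $groups " in
  *" dba "*|*" oper "*) echo "dba/oper membership present" ;;
  *) echo "no dba/oper membership: ORA-01031 is the expected symptom" ;;
esac
```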


  • 09/06/13--05:58: Moving mirrorlog files
  • Hi,

     

    I need to move the mirror log files (Oracle 11g) from one drive to another on Windows Server 2008 R2.

     

    I have gone through a couple of posts, but they describe different ways of doing it.

     

    I am not entirely sure how to proceed.

     

    Thanks.

     

    Best Regards,

    Anita
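    One commonly described approach, sketched here with standard Oracle DDL (the file paths and group number are hypothetical, not taken from this system): add a new member for each redo log group on the target drive, switch logs so the old member's group is no longer current, then drop the old member and delete its file at OS level.

```sql
-- Add a mirror member on the new drive (repeat for each log group).
ALTER DATABASE ADD LOGFILE MEMBER 'E:\oracle\SID\mirrlogA\log_g11m2.dbf' TO GROUP 11;
-- Make sure the group holding the old member is not CURRENT.
ALTER SYSTEM SWITCH LOGFILE;
-- Drop the old member, then remove its file at the OS level.
ALTER DATABASE DROP LOGFILE MEMBER 'D:\oracle\SID\mirrlogA\log_g11m2.dbf';
```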


  • 09/05/13--22:38: SAP Oracle Upgrade issue
  • Hello,

     

    I'm going through an Oracle upgrade from 10.2.0.4 to 11.2.0.3. I have already done the pre-upgrade tasks with no problems; however, I'm getting a dump which says that I have a corrupted data block. Should I still go through DBUA, or will it give me an error during the upgrade or post-upgrade tasks? I'd like to open an OSS message, but as my database is still on 10g I'd probably not get much support.

     

    Regards,

     

    JAM

     

    OS: AIX 7.1
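    Before deciding on DBUA, it may help to locate the corruption first, e.g. with the dbv utility against the affected datafile or with RMAN validation. A sketch using a standard Oracle view (how many blocks are affected determines whether a block recovery or restore is needed before upgrading):

```sql
-- Populated by an RMAN "BACKUP VALIDATE DATABASE" run; lists bad blocks.
SELECT file#, block#, blocks, corruption_type
FROM   v$database_block_corruption;
```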



    Hi Gurus

    We are planning to set up a disaster recovery site using Oracle 11g Data Guard.

    Could you please answer the following queries?

     

    1) Is Oracle Data Guard part of the Oracle DVD set provided or downloaded from SAP?

    2) Do we need to procure an additional license for Oracle Data Guard from SAP?

    3) Could you point me to any relevant SAP Note numbers?

     

    Thanks and Regards

    Upendra



    Dear all,

     

    Earlier we took offline and online backups using TSM (backup device type util_file).

    Now we are setting up disaster recovery, and the target-system system copy installation has stopped in the "Backup Restore" phase.

    Please tell me step by step how to trigger an Oracle offline backup to tape.

    Regards,

    gayathri
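    For reference, a minimal command-line sketch (assuming BR*Tools with OS authentication and a tape device configured in the profile; the profile name is a placeholder, and this is not runnable outside an SAP/Oracle host):

```sh
# Offline whole-database backup to the tape device configured in init<SID>.sap
brbackup -u / -p init<SID>.sap -t offline -d tape -m all
# Save the archived redo logs to tape afterwards
brarchive -u / -p init<SID>.sap -d tape -s
```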


  • 09/09/13--00:34: Locking in R3
  • Hi All,

     

     

    I have a doubt: Oracle itself uses row-level locking, and since SAP R/3 runs on top of it, the update queries should also take row-level locks.

    I believe row-level locks do not stay long on the database after a record has been updated.

    Then how can there be 8,000 lock entries in a system at a given point in time? At times these locks stay for more than 9-10 hours.

    Kindly provide some links to understand this concept. Are these lock entries not following row-level locking?

     

    Thanks,

    Swadesh
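    A side note that may explain the numbers (general SAP/Oracle behavior, offered as context rather than a diagnosis): long-lived entries visible in transaction SM12 are SAP enqueue locks held by the enqueue server at the application level; they are not Oracle row locks. What Oracle itself is holding, and for how long, can be checked with a sketch like:

```sql
-- Oracle-level locks per session, with how long they have been held.
SELECT s.sid, s.username, l.type, l.lmode, l.request, l.ctime AS held_secs
FROM   v$lock l
JOIN   v$session s ON s.sid = l.sid
ORDER  BY l.ctime DESC;
```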



    Hi Experts,

     

    Can anyone help me set the recommended Oracle parameters for our SAP environment and number of work processes?

     

    Performance tuning parameters for Oracle

     

    Our Environment:

     

    Oracle 11.2.0.3 

    Windows Server 2008 R2

    Physical memory 36GB

    Swap memory: 20GB

    Kernel 721 EXT, patch level 100, Unicode, 64-bit

    2 servers (1 SAP instance + 1 dialog instance)

     

    Warm Regards,

    Vasan


  • 09/09/13--04:18: ReturnCode -1403
  • Dear All,

    FI users are facing delays when trying to save data in transaction F-29. Please check the attached trace and guide me on rectifying this error, ReturnCode -1403.

    The system is running on Windows Server 2008 with Oracle 10.2.0.5.

     

     

    Regards,





    Dear Experts,

     

    Due to a GoLive Check recommendation, we have been tasked with updating our Oracle patch level from 11.2.0.3.0 to 11.2.0.3.7. Sadly, the installation of this patch has not gone as expected.

     

    I have installed the patch as per this link, following all the recommendations: making sure everything is stopped when needed, using the fuser command for stale sessions, updating both OPatch and MOPatch to the latest available versions, and so on.

     

    However, during the installation, out of the 61 patches that were supposed to be installed, only 30 were installed successfully. The remaining 31 patches were not installed because of missing prerequisites or conflicts, except for 3 patches of the bundle (9584028, 9458152, 14488478) that actually failed during the installation.

     

    These 3 patches all failed with the same error. It seems to be trying to copy a file from one folder to another and is getting a "file doesn't exist" error.

    Note: a lot of people on the Internet / forums have issues throughout the installation because of authorization problems; this is NOT the case here.

     

    All with the similar error: Copy Action: Source file /oracle/S11/112_64/.patch_storage/9584028_Jun_22_2012_11_39_40/files/sap/ora_upgrade/post_upgrade/post_upgrade_checks.sql" does not exist. 'oracle.rdbms, 11.2.0.3.0': Cannot copy file from 'post_upgrade_checks.sql

     

    The odd thing is that the patch was compiled on May 15 2013, but it is somehow referencing a folder from Jun 22 2012...

     

    I can no longer restore the backup that was taken before the update, as several days have passed since the update and I cannot make the consultants lose 7 days of work. So, I wonder:

     

    1. How can I fix this issue? I mean, has anyone encountered the same problem with these patches?

    2. If it cannot be fixed, are these patches critical? I mean, SAP said that these patches do NOT modify Oracle binaries, so I don't think they are that critical... but are they a must?

     

    Thank you for your time,

    Kind regards,

    PIU



    Hello All,

     

    The system is running SAP NetWeaver 7.0 with Oracle 10g.

     

    Is it possible to upgrade the kernel from 700 to 720?

     

    Please let me know of any prechecks before the kernel upgrade.

     

    Please advise .



    Hello All,

     

    I need to build a new system from an existing system. We have already started the DB export on the source system.

     

    The new system is also ready; once the export completes on the source system, I need to build the new system using the source's DB export.

     

    Please advise on the procedure and any prechecks before the DB import on the target system.

     

    Thanks .

     

    Source system:

    AIX, Oracle 11g and ECC 6.0

     

    Target: also AIX, with Oracle 11g already installed.



    Hello, experts.

     

    Now I'm doing a system copy of a distributed system with R3load. The DB instance is Oracle Exadata.

    When I execute "Operating System Users and Groups" under the "Additional Preparations Options", an error occurs.

    And in the "Create users for SAP system" phase, sapinst disconnects suddenly.

     

    Could you help me solve this?

     

    Error message in my console

    terminate called after throwing an instance of 'ESyAccountSystemCallFailedImpl_<ESyAccountSystemCallFailed>'

    1. iauxsysex.c:365: child /u01/app/instlog/20130903_3/sapinst_exe.32131.1378186356/sapinst (pid 32143) has crashed. Executable directory is /u01/app/instlog/20130903_3/sapinst_exe.32131.1378186356. Contact Support.
    2. iaextract.c:1094: child has signaled an exec error (-134). Keeping directory /u01/app/instlog/20130903_3/sapinst_exe.32131.1378186356

    --------------------------------

    Sep 3, 2013 2:37:17 PM [Info]: Stopping service "SAPinstService" ...

    Sep 3, 2013 2:37:17 PM [Info]: Service "SAPinstService" stopped.

    Sep 3, 2013 2:37:17 PM [Info]: Services stopped.

    Sep 3, 2013 2:37:17 PM [Info]: Server shutdown by SAPinstService

     

    =======

     

    I also tried to execute the DB Instance installation.

    However, a similar error occurred. This time I found the message below in sapinst_dev_user_create.log.

     

    sapinst_dev_user_create.log

    …………………………

    At line 2362 file syuxcuser.cpp

    Call stack:

    1. iaxxbprocess.cpp: 36: CIaOsProcess::CEIdJanitor::~CEIdJanitor()
    2. syuxccuren.cpp: 233: CSyCurrentProcessEnvironmentImpl::setEffectiveUser(PSyUserInt, const iastring&)
    3. syxxbuser.cpp: 130: *** syslib entry point CSyUser::getPrimaryGroup(void) const ***
    4. syuxcuser.cpp: 625: PSyGroupImpl CSyUserImpl::getPrimaryGroup()const
    5. syuxcuser.cpp: 2317: CSyUserImpl_getOsInfos(iastring sName, iastring sID, tSyUserInfo& msUserinfo)

     

    Return value of function getpwnam(root) is NULL.

    Failed action:  with parameters

    Error number 207 error type SPECIFIC_CODE

     

     

    INFO       2013-09-02 19:43:40.455 [syuxccuren.cpp:285]

               CSyCurrentProcessEnvironmentImpl::setEffectiveGroup(PSyGroupInt)

               lib=syslib module=syslib

    Effective group id set to 2005.

     

    ERROR      2013-09-02 19:43:40.456 [syuxcuser.cpp:2360]

               CSyUserImpl_getOsInfos(iastring sName, iastring sID, tSyUserInfo& msUserinfo)

               lib=syslib module=syslib

    FSH-00006  Return value of function getpwnam(root) is NULL.

     

    TRACE      2013-09-02 19:43:40.456 [syuxcuser.cpp:231]

               CSyUserImpl_getOsInfos(iastring sName, iastring sID, tSyUserInfo& msUserinfo)

               lib=syslib module=syslib

    Exception thrown near line 2362 in file syuxcuser.cpp

    Stack trace:

    1. syuxccuren.cpp: 377: CSyCurrentProcessEnvironmentImpl::set(PSyProcessEnvironmentInt)
    2. syuxccuren.cpp: 233: CSyCurrentProcessEnvironmentImpl::setEffectiveUser(PSyUserInt, const iastring&)
    3. syxxbuser.cpp: 130: *** syslib entry point CSyUser::getPrimaryGroup(void) const ***
    4. syuxcuser.cpp: 625: PSyGroupImpl CSyUserImpl::getPrimaryGroup()const
    5. syuxcuser.cpp: 2317: CSyUserImpl_getOsInfos(iastring sName, iastring sID, tSyUserInfo& msUserinfo)

     

     

    At line 2362 file syuxcuser.cpp

    Call stack:

    1. syuxccuren.cpp: 377: CSyCurrentProcessEnvironmentImpl::set(PSyProcessEnvironmentInt)
    2. syuxccuren.cpp: 233: CSyCurrentProcessEnvironmentImpl::setEffectiveUser(PSyUserInt, const iastring&)
    3. syxxbuser.cpp: 130: *** syslib entry point CSyUser::getPrimaryGroup(void) const ***
    4. syuxcuser.cpp: 625: PSyGroupImpl CSyUserImpl::getPrimaryGroup()const
    5. syuxcuser.cpp: 2317: CSyUserImpl_getOsInfos(iastring sName, iastring sID, tSyUserInfo& msUserinfo)

     

    Return value of function getpwnam(root) is NULL.

    Failed action:  with parameters

    Error number 207 error type SPECIFIC_CODE

    …………………………

     

    Regards,

    Naomi Yamane
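    The failing call sapinst reports can be reproduced outside sapinst (a sketch, based on the standard meaning of the FSH-00006 message in the log above, not on knowledge of this specific host): getpwnam("root") returning NULL means the NSS user lookup failed, which often points at a broken /etc/nsswitch.conf or an unreachable LDAP/NIS source rather than at sapinst itself.

```shell
#!/bin/sh
# Sketch: reproduce the user lookup that sapinst performs via getpwnam.
# getent uses the same NSS configuration; if this fails for "root",
# sapinst's FSH-00006 is expected.
entry=$(getent passwd root || true)
if [ -n "$entry" ]; then
  echo "lookup OK: $entry"
else
  echo "lookup failed - same symptom sapinst reports"
fi
```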



    Hello,

    Now I'm doing an ERP installation with R3load.

     

    I exported the R3 data from the source system successfully; however, I noticed that the client 000 DDIC password I know is incorrect.

     

    I hear there may be a way to proceed with the installation even without knowing the client 000 DDIC password, but I don't know the details.

    If someone knows it, please help me.

     

    Regards,

    Naomi Yamane


  • 09/12/13--04:56: Redo Log backup
  • Dear All;

     

    I take an offline backup plus a redo log backup every weekend.

     

    I have already used the offline backup for many things, such as the quality-system refresh, but I have never used the redo log backup.

     

    Can anyone tell me what the redo logs are used for?

     

    Best Regards

    ~Amal Aloun
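    For background (standard BR*Tools behavior, sketched rather than quoted from documentation): the redo log backup is what allows a restored offline backup to be rolled forward, e.g. to a point in time just before a failure, instead of only back to the moment the backup was taken. A command sketch (brrecover prompts for details such as the target time when they are not supplied):

```sh
# Restore the datafiles from the last offline backup ...
brrestore -u / -m all
# ... then recover to a database point in time, applying archived redo logs
brrecover -u / -t dbpit
```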



    Hello Team,

     

    I'm setting up online backup to tape with ARCserve, but I'm having a problem when BR*Tools opens the Oracle file system.


    I'm using ECC 6.0 with an Oracle database on AIX.


    Does anyone have a step-by-step document on configuring online SAP BR*Tools backup with ARCserve?

     

    =========================================================================================

     

    '/oracle/SRQ/sapdata2/sr3_5/sr3.data5'.
     
      09/11 17:42:06(18808906) -
      09/11 17:42:06(18808906) - - DSAOpenDataFile(): cannot open file '/oracle/SRQ/sapdata2/sr3_8/sr3.data8'.
     
      09/11 17:42:12(18808906) -
      09/11 17:42:12(18808906) - - DSAOpenDataFile(): cannot open file '/oracle/SRQ/sapdata3/sr3_3/sr3.data3'.
     

    ===============================================================================

     


    The initSRQ.sap settings follow:

     

     

    =========================================================================================

     

    # @(#) $Id: //bas/720_REL/src/ccm/rsbr/initAIX.sap#11 $ SAP

    ########################################################################

    #                                                                      #

    # SAP BR*Tools sample profile.                                         #

    # The parameter syntax is the same as for init.ora parameters.         #

    # Enclose parameter values which consist of more than one symbol in    #

    # double quotes.                                                       #

    # After any symbol, parameter definition can be continued on the next  #

    # line.                                                                #

    # A parameter value list should be enclosed in parentheses, the list   #

    # items should be delimited by commas.                                 #

    # There can be any number of white spaces (blanks, tabs and new lines) #

    # between symbols in parameter definition.                             #

    # Comment lines must start with a hash character.                      #

    #                                                                      #

    ########################################################################

    # backup mode [all | all_data | full | incr | sap_dir | ora_dir

    # | all_dir | <tablespace_name> | <file_id> | <file_id1>-<file_id2>

    # | <generic_path> | (<object_list>)]

    # default: all

    backup_mode = all

    # restore mode [all | all_data | full | incr | incr_only | incr_full

    # | incr_all | <tablespace_name> | <file_id> | <file_id1>-<file_id2>

    # | <generic_path> | (<object_list>) | partial | non_db

    # redirection with '=' is not supported here - use option '-m' instead

    # default: all

    restore_mode = all

    # backup type [offline | offline_force | offline_standby | offline_split

    # | offline_mirror | offline_stop | online | online_cons | online_split

    # | online_mirror | online_standby | offstby_split | offstby_mirror

    # default: offline

    backup_type = online

    # backup device type

    # [tape | tape_auto | tape_box | pipe | pipe_auto | pipe_box | disk

    # | disk_copy | disk_standby | stage | stage_copy | stage_standby

    # | util_file | util_file_online | util_vol | util_vol_online

    # | rman_util | rman_disk | rman_stage | rman_prep]

    # default: tape

    backup_dev_type = util_file_online

    # backup root directory [<path_name> | (<path_name_list>)]

    # default: $SAPDATA_HOME/sapbackup

    backup_root_dir = /oracle/SRQ/sapbackup

    # stage root directory [<path_name> | (<path_name_list>)]

    # default: value of the backup_root_dir parameter

    stage_root_dir = /oracle/SRQ/sapbackup

    # compression flag [no | yes | hardware | only | brtools]

    # default: no

    #compress = no

    # compress command

    # first $-character is replaced by the source file name

    # second $-character is replaced by the target file name

    # <target_file_name> = <source_file_name>.Z

    # for compress command the -c option must be set

    # recommended setting for brbackup -k only run:

    # "compress -b 12 -c $ > $"

    # no default

    compress_cmd = "compress -c $ > $"

    # uncompress command

    # first $-character is replaced by the source file name

    # second $-character is replaced by the target file name

    # <source_file_name> = <target_file_name>.Z

    # for uncompress command the -c option must be set

    # no default

    uncompress_cmd = "uncompress -c $ > $"

    # directory for compression [<path_name> | (<path_name_list>)]

    # default: value of the backup_root_dir parameter

    compress_dir = /oracle/SRQ/sapbackup

    # brarchive function [save | second_copy | double_save | save_delete

    # | second_copy_delete | double_save_delete | copy_save

    # | copy_delete_save | delete_saved | delete_copied]

    # default: save

    archive_function = save_delete

    # directory for archive log copies to disk

    # default: first value of the backup_root_dir parameter

    archive_copy_dir = /oracle/SRQ/sapbackup

    # directory for archive log copies to stage

    # default: first value of the stage_root_dir parameter

    archive_stage_dir = /oracle/SRQ/sapbackup

    # delete archive logs from duplex destination [only | no | yes | check]

    # default: only

    # archive_dupl_del = only

    # new sapdata home directory for disk_copy | disk_standby

    # no default

    # new_db_home = /oracle/C11

    # stage sapdata home directory for stage_copy | stage_standby

    # default: value of the new_db_home parameter

    # stage_db_home = /oracle/C11

    # original sapdata home directory for split mirror disk backup

    # no default

    # orig_db_home = /oracle/C11

    # remote host name

    # no default

    # remote_host = <host_name>

    # remote user name

    # default: current operating system user

    # remote_user = <user_name>

    # tape copy command [cpio | cpio_gnu | dd | dd_gnu | rman | rman_gnu

    # | rman_dd | rman_dd_gnu | brtools | rman_brt]

    # default: cpio

    tape_copy_cmd = cpio

    # disk copy command [copy | copy_gnu | dd | dd_gnu | rman | rman_gnu

    # | rman_set | rman_set_gnu | ocopy]

    # ocopy - only on Windows

    # default: copy

    disk_copy_cmd = rman_set

    # stage copy command [rcp | scp | ftp | wcp]

    # wcp - only on Windows

    # default: rcp

    stage_copy_cmd = rcp

    # pipe copy command [rsh | ssh]

    # default: rsh

    pipe_copy_cmd = rsh

    # flags for cpio output command

    # default: -ovB

    cpio_flags = -ovB

    # flags for cpio input command

    # default: -iuvB

    cpio_in_flags = -iuvB

    # flags for cpio command for copy of directories to disk

    # default: -pdcu

    # use flags -pdu for gnu tools

    cpio_disk_flags = -pdcu

    # flags for dd output command

    # default: "obs=16k"

    # recommended setting:

    # Unix:    "obs=nk bs=nk", example: "obs=64k bs=64k"

    # Windows: "bs=nk",        example: "bs=64k"

    dd_flags = "obs=64k bs=64k"

    # flags for dd input command

    # default: "ibs=16k"

    # recommended setting:

    # Unix:    "ibs=nk bs=nk", example: "ibs=64k bs=64k"

    # Windows: "bs=nk",        example: "bs=64k"

    dd_in_flags = "ibs=64k bs=64k"

    # number of members in RMAN save sets [ 1 | 2 | 3 | 4 | tsp | all ]

    # default: 1

    saveset_members = 1

    # additional parameters for RMAN

    # following parameters are relevant only for rman_util, rman_disk or

    # rman_stage: rman_channels, rman_filesperset, rman_maxsetsize,

    # rman_pool, rman_copies, rman_proxy, rman_parms, rman_send

    # rman_maxpiecesize can be used to split an incremental backup saveset

    # into multiple pieces

    # rman_channels defines the number of parallel sbt channel allocations

    # rman_filesperset = 0 means:

    # one file per save set - for non-incremental backups

    # up to 64 files in one save set - for incremental backups

    # the others have the same meaning as for native RMAN

    # rman_channels = 1

    # rman_filesperset = 0

    # rman_maxopenfiles = 0

    # rman_maxsetsize = 0      # n[K|M|G] in KB (default), in MB or in GB

    # rman_maxpiecesize = 0    # n[K|M|G] in KB (default), in MB or in GB

    # rman_sectionsize = 0     # n[K|M|G] in KB (default), in MB or in GB

    # rman_rate = 0            # n[K|M|G] in KB (default), in MB or in GB

    # rman_diskratio = 0

    # rman_duration = 0        # <min> - for minimizing disk load

    # rman_keep = 0            # <days> - retention time

    # rman_pool = 0

    # rman_copies = 0 | 1 | 2 | 3 | 4

    # rman_proxy = no | yes | only

    # rman_parms = "BLKSIZE=65536 ENV=(BACKUP_SERVER=HOSTNAME)"

    # rman_send = "'<command>'"

    # rman_send = ("channel sbt_1 '<command1>' parms='<parameters1>'",

    #              "channel sbt_2 '<command2>' parms='<parameters2>'")

    # rman_compress = no | yes

    # rman_maxcorrupt = (<dbf_name>|<dbf_id>:<corr_cnt>, ...)

    # rman_cross_check = none | archive | arch_force

    # remote copy-out command (backup_dev_type = pipe)

    # $-character is replaced by current device address

    # no default

    copy_out_cmd = "dd ibs=8k obs=64k of=$"

    # remote copy-in command (backup_dev_type = pipe)

    # $-character is replaced by current device address

    # no default

    copy_in_cmd = "dd ibs=64k obs=8k if=$"

    # rewind command

    # $-character is replaced by current device address

    # no default

    # operating system dependent, examples:

    # HP-UX:   "mt -f $ rew"

    # TRU64:   "mt -f $ rewind"

    # AIX:     "tctl -f $ rewind"

    # Solaris: "mt -f $ rewind"

    # Windows: "mt -f $ rewind"

    # Linux:   "mt -f $ rewind"

    rewind = "tctl -f $ rewind"

    # rewind and set offline command

    # $-character is replaced by current device address

    # default: value of the rewind parameter

    # operating system dependent, examples:

    # HP-UX:   "mt -f $ offl"

    # TRU64:   "mt -f $ offline"

    # AIX:     "tctl -f $ offline"

    # Solaris: "mt -f $ offline"

    # Windows: "mt -f $ offline"

    # Linux:   "mt -f $ offline"

    rewind_offline = "tctl -f $ offline"

    # tape positioning command

    # first $-character is replaced by current device address

    # second $-character is replaced by number of files to be skipped

    # no default

    # operating system dependent, examples:

    # HP-UX:   "mt -f $ fsf $"

    # TRU64:   "mt -f $ fsf $"

    # AIX:     "tctl -f $ fsf $"

    # Solaris: "mt -f $ fsf $"

    # Windows: "mt -f $ fsf $"

    # Linux:   "mt -f $ fsf $"

    tape_pos_cmd = "tctl -f $ fsf $"

    # mount backup volume command in auto loader / juke box

    # used if backup_dev_type = tape_box | pipe_box

    # no default

    # mount_cmd = "<mount_cmd> $ $ $ [$]"

    # dismount backup volume command in auto loader / juke box

    # used if backup_dev_type = tape_box | pipe_box

    # no default

    # dismount_cmd = "<dismount_cmd> $ $ [$]"

    # split mirror disks command

    # used if backup_type = offline_split | online_split | offline_mirror

    # | online_mirror

    # no default

    # split_cmd = "<split_cmd> [$]"

    # resynchronize mirror disks command

    # used if backup_type = offline_split | online_split | offline_mirror

    # | online_mirror

    # no default

    # resync_cmd = "<resync_cmd> [$]"

    # additional options for SPLITINT interface program

    # no default

    # split_options = "<split_options>"

    # resynchronize after backup flag [no | yes]

    # default: no

    # split_resync = no

    # pre-split command

    # no default

    # pre_split_cmd = "<pre_split_cmd>"

    # post-split command

    # no default

    # post_split_cmd = "<post_split_cmd>"

    # pre-shut command

    # no default

    # pre_shut_cmd = "<pre_shut_cmd>"

    # post-shut command

    # no default

    # post_shut_cmd = "<post_shut_cmd>"

    # pre-archive command

    # no default

    # pre_arch_cmd = "<pre_arch_cmd> [$]"

    # post-archive command

    # no default

    # post_arch_cmd = "<post_arch_cmd> [$]"

    # pre-backup command

    # no default

    # pre_back_cmd = "<pre_back_cmd> [$]"

    # post-backup command

    # no default

    # post_back_cmd = "<post_back_cmd> [$]"

    # volume size in KB = K, MB = M or GB = G (backup device dependent)

    # default: 1200M

    # recommended values for tape devices without hardware compression:

    # 60 m   4 mm  DAT DDS-1 tape:    1200M

    # 90 m   4 mm  DAT DDS-1 tape:    1800M

    # 120 m  4 mm  DAT DDS-2 tape:    3800M

    # 125 m  4 mm  DAT DDS-3 tape:   11000M

    # 112 m  8 mm  Video tape:        2000M

    # 112 m  8 mm  high density:      4500M

    # DLT 2000     10/20 GB:         10000M

    # DLT 2000XT   15/30 GB:         15000M

    # DLT 4000     20/40 GB:         20000M

    # DLT 7000     35/70 GB:         35000M

    # recommended values for tape devices with hardware compression:

    # 60 m   4 mm  DAT DDS-1 tape:    1000M

    # 90 m   4 mm  DAT DDS-1 tape:    1600M

    # 120 m  4 mm  DAT DDS-2 tape:    3600M

    # 125 m  4 mm  DAT DDS-3 tape:   10000M

    # 112 m  8 mm  Video tape:        1800M

    # 112 m  8 mm  high density:      4300M

    # DLT 2000     10/20 GB:          9000M

    # DLT 2000XT   15/30 GB:         14000M

    # DLT 4000     20/40 GB:         18000M

    # DLT 7000     35/70 GB:         30000M

    tape_size = 100G

    # volume size in KB = K, MB = M or GB = G used by brarchive

    # default: value of the tape_size parameter

    # tape_size_arch = 100G

    # tape block size in KB for brtools as tape copy command on Windows

    # default: 64

    # tape_block_size = 64

    # rewind and set offline for brtools as tape copy command on Windows

    # yes | no

    # default: yes

    # tape_set_offline = yes

    # level of parallel execution

    # default: 0 - set to number of backup devices

    exec_parallel = 0

    # address of backup device without rewind

    # [<dev_address> | (<dev_address_list>)]

    # no default

    # operating system dependent, examples:

    # HP-UX:   /dev/rmt/0mn

    # TRU64:   /dev/nrmt0h

    # AIX:     /dev/rmt0.1

    # Solaris: /dev/rmt/0mn

    # Windows: /dev/nmt0

    # Linux:   /dev/nst0

    tape_address = /dev/rmt0.1

    # address of backup device without rewind used by brarchive

    # default: value of the tape_address parameter

    # operating system dependent

    # tape_address_arch = /dev/rmt0.1

    # address of backup device with rewind

    # [<dev_address> | (<dev_address_list>)]

    # no default

    # operating system dependent, examples:

    # HP-UX:   /dev/rmt/0m

    # TRU64:   /dev/rmt0h

    # AIX:     /dev/rmt0

    # Solaris: /dev/rmt/0m

    # Windows: /dev/mt0

    # Linux:   /dev/st0

    tape_address_rew = /dev/rmt0

    # address of backup device with rewind used by brarchive

    # default: value of the tape_address_rew parameter

    # operating system dependent

    # tape_address_rew_arch = /dev/rmt0

    # address of backup device with control for mount/dismount command

    # [<dev_address> | (<dev_address_list>)]

    # default: value of the tape_address_rew parameter

    # operating system dependent

    # tape_address_ctl = /dev/...

    # address of backup device with control for mount/dismount command

    # used by brarchive

    # default: value of the tape_address_rew_arch parameter

    # operating system dependent

    # tape_address_ctl_arch = /dev/...

    # volumes for brarchive

    # [<volume_name> | (<volume_name_list>) | SCRATCH]

    # no default

    volume_archive = (SRQA01, SRQA02, SRQA03, SRQA04, SRQA05,

                      SRQA06, SRQA07, SRQA08, SRQA09, SRQA10,

                      SRQA11, SRQA12, SRQA13, SRQA14, SRQA15,

                      SRQA16, SRQA17, SRQA18, SRQA19, SRQA20,

                      SRQA21, SRQA22, SRQA23, SRQA24, SRQA25,

                      SRQA26, SRQA27, SRQA28, SRQA29, SRQA30)

    # volumes for brbackup

    # [<volume_name> | (<volume_name_list>) | SCRATCH]

    # no default

    volume_backup = (SRQB01, SRQB02, SRQB03, SRQB04, SRQB05,

                     SRQB06, SRQB07, SRQB08, SRQB09, SRQB10,

                     SRQB11, SRQB12, SRQB13, SRQB14, SRQB15,

                     SRQB16, SRQB17, SRQB18, SRQB19, SRQB20,

                     SRQB21, SRQB22, SRQB23, SRQB24, SRQB25,

                     SRQB26, SRQB27, SRQB28, SRQB29, SRQB30)

    # expiration period in days for backup volumes

    # default: 30

    expir_period = 30

    # recommended usages of backup volumes

    # default: 100

    tape_use_count = 100

    # backup utility parameter file

    # default: no parameter file

    # null - no parameter file

    # util_par_file = initSRQ.utl

    # backup utility parameter file for volume backup

    # default: no parameter file

    # null - no parameter file

    # util_vol_par_file = initSRQ.vol

    # additional options for BACKINT interface program

    # no default

    # "" - no additional options

    # util_options = "<backint_options>"

    # additional options for BACKINT volume backup type

    # no default

    # "" - no additional options

    # util_vol_options = "<backint_options>"

    # path to directory BACKINT executable will be called from

    # default: sap-exe directory

    # null - call BACKINT without path

    # util_path = <dir>|null

    # path to directory BACKINT will be called from for volume backup

    # default: sap-exe directory

    # null - call BACKINT without path

    # util_vol_path = <dir>|null

    # disk volume unit for BACKINT volume backup type

    # [disk_vol | sap_data | all_data | all_dbf]

    # default: sap_data

    # util_vol_unit = <unit>

    # additional access to files saved by BACKINT volume backup type

    # [none | copy | mount | both]

    # default: none

    # util_vol_access = <access>

    # negative file/directory list for BACKINT volume backup type

    # [<file_dir_name> | (<file_dir_list>) | no_check]

    # default: none

    # util_vol_nlist = <nlist>

    # mount/dismount command parameter file

    # default: no parameter file

    # mount_par_file = initSRQ.mnt

    # Oracle connection name to the primary database

    # [primary_db = <conn_name> | LOCAL]

    # no default

    # primary_db = <conn_name>

    # Oracle connection name to the standby database

    # [standby_db = <conn_name> | LOCAL]

    # no default

    # standby_db = <conn_name>

    # description of parallel instances for Oracle RAC

    # parallel_instances = <inst_desc> | (<inst_desc_list>)

    # <inst_desc_list>   - <inst_desc>[,<inst_desc>...]

    # <inst_desc>        - <Oracle_sid>:<Oracle_home>@<conn_name>

    # <Oracle_sid>       - Oracle system id for parallel instance

    # <Oracle_home>      - Oracle home for parallel instance

    # <conn_name>        - Oracle connection name to parallel instance

    # Please include the local instance in the parameter definition!

    # default: no parallel instances

    # example for initRAC001.sap:

    # parallel_instances = (RAC001:/oracle/RAC/920_64@RAC001,

    # RAC002:/oracle/RAC/920_64@RAC002, RAC003:/oracle/RAC/920_64@RAC003)

    # local Oracle RAC database homes [no | yes]

    # default: no - shared database homes

    # loc_ora_homes = yes

    # handling of Oracle RAC database services [no | yes]

    # default: no

    # db_services = yes

    # database owner of objects to be checked

    # <owner> | (<owner_list>)

    # default: all SAP owners

    # check_owner = SAPSR3

    # database objects to be excluded from checks

    # all_part | non_sap | [<owner>.]<table> | [<owner>.]<index>

    # | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

    # default: no exclusion, example:

    # check_exclude = (SDBAH, SAPSR3.SDBAD)

    # special database check conditions

    # ("<type>:<cond>:<active>:<sever>:[<chkop>]:[<chkval>]:[<unit>]", ...)

    # check_cond = (<cond_list>)

    # database owner of SDBAH, SDBAD and XDB tables for cleanup

    # <owner> | (<owner_list>)

    # default: all SAP owners

    # cleanup_owner = SAPSR3

    # retention period in days for brarchive log files

    # default: 30

    # cleanup_brarchive_log = 30

    # retention period in days for brbackup log files

    # default: 30

    # cleanup_brbackup_log = 30

    # retention period in days for brconnect log files

    # default: 30

    # cleanup_brconnect_log = 30

    # retention period in days for brrestore log files

    # default: 30

    # cleanup_brrestore_log = 30

    # retention period in days for brrecover log files

    # default: 30

    # cleanup_brrecover_log = 30

    # retention period in days for brspace log files

    # default: 30

    # cleanup_brspace_log = 30

    # retention period in days for archive log files saved on disk

    # default: 30

    # cleanup_disk_archive = 30

    # retention period in days for database files backed up on disk

    # default: 30

    # cleanup_disk_backup = 30

    # retention period in days for brspace export dumps and scripts

    # default: 30

    # cleanup_exp_dump = 30

    # retention period in days for Oracle trace and audit files

    # default: 30

    # cleanup_ora_trace = 30

    # retention period in days for records in SDBAH and SDBAD tables

    # default: 100

    # cleanup_db_log = 100

    # retention period in days for records in XDB tables

    # default: 100

    # cleanup_xdb_log = 100

    # retention period in days for database check messages

    # default: 100

    # cleanup_check_msg = 100

    # database owner of objects to adapt next extents

    # <owner> | (<owner_list>)

    # default: all SAP owners

    # next_owner = SAPSR3

    # database objects to adapt next extents

    # all | all_ind | special | [<owner>.]<table> | [<owner>.]<index>

    # | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

    # default: all objects of selected owners, example:

    # next_table = (SDBAH, SAPSR3.SDBAD)

    # database objects to be excluded from adapting next extents

    # all_part | [<owner>.]<table> | [<owner>.]<index>

    # | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

    # default: no exclusion, example:

    # next_exclude = (SDBAH, SAPSR3.SDBAD)

    # database objects to get special next extent size

    # allsel:<size>[/<limit>] | [<owner>.]<table>:<size>[/<limit>]

    # | [<owner>.]<index>:<size>[/<limit>]

    # | [<owner>.][<prefix>]*[<suffix>]:<size>[/<limit>]

    # | (<object_size_list>)

    # default: according to table category, example:

    # next_special = (SDBAH:100K, SAPSR3.SDBAD:1M/200)

    # maximum next extent size

    # default: 2 GB - 5 * <database_block_size>

    # next_max_size = 1G

    # maximum number of next extents

    # default: 0 - unlimited

    # next_limit_count = 300

    # database owner of objects to update statistics

    # <owner> | (<owner_list>)

    # default: all SAP owners

    # stats_owner = SAPSR3

    # database objects to update statistics

    # all | all_ind | all_part | missing | info_cubes | dbstatc_tab

    # | dbstatc_mon | dbstatc_mona | [<owner>.]<table> | [<owner>.]<index>

    # | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

    # | harmful | locked | system_stats | oradict_stats | oradict_tab

    # default: all objects of selected owners, example:

    # stats_table = (SDBAH, SAPSR3.SDBAD)

    # database objects to be excluded from updating statistics

    # all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>

    # | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

    # default: no exclusion, example:

    # stats_exclude = (SDBAH, SAPSR3.SDBAD)

    # method for updating statistics for tables not in DBSTATC

    # E | EH | EI | EX | C | CH | CI | CX | A | AH | AI | AX | E= | C= | =H

    # | =I | =X | +H | +I

    # default: according to internal rules

    # stats_method = E

    # sample size for updating statistics for tables not in DBSTATC

    # P<percentage_of_rows> | R<thousands_of_rows>

    # default: according to internal rules

    # stats_sample_size = P10

    # number of buckets for updating statistics with histograms

    # default: 75

    # stats_bucket_count = 75

    # threshold for collecting statistics after checking

    # <threshold> | (<threshold> [, all_part:<threshold>

    # | info_cubes:<threshold> | [<owner>.]<table>:<threshold>

    # | [<owner>.][<prefix>]*[<suffix>]:<threshold>

    # | <tablespace>:<threshold> | <object_list>])

    # default: 50%

    # stats_change_threshold = 50

    # number of parallel threads for updating statistics

    # default: 1

    # stats_parallel_degree = 1

    # processing time limit in minutes for updating statistics

    # default: 0 - no limit

    # stats_limit_time = 0

    # parameters for calling DBMS_STATS supplied package

    # all:R|B|H|G[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

    # | all_part:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

    # | info_cubes:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

    # | [<owner>.]<table>:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

    # | [<owner>.][<prefix>]*[<suffix>]:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0

    # |<degree>|A|D | (<object_list>) | NO

    # R|B - sampling method:

    # 'R' - row sampling, 'B' - block sampling,

    # 'H' - histograms by row sampling, 'G' - histograms by block sampling

    # [<buckets>|A|S|R|D] - buckets count:

    # <buckets> - histogram buckets count, 'A' - auto buckets count,

    # 'S' - skew-only, 'R' - repeat, 'D' - default buckets count (75)

    # [A|I|P|X|D] - columns with histograms:

    # 'A' - all columns, 'I' - indexed columns, 'P' - partition columns,

    # 'X' - indexed and partition columns, 'D' - default columns

    # 0|<degree>|A|D - parallel degree:

    # '0' - default table degree, <degree> - dbms_stats parallel degree,

    # 'A' - dbms_stats auto degree, 'D' - default Oracle degree

    # default: ALL:R:0

    # stats_dbms_stats = ([ALL:R:1,][<owner>.]<table>:R:<degree>,...)

    # definition of info cube tables

    # default | rsnspace_tab | [<owner>.]<table>

    # | [<owner>.][<prefix>]*[<suffix>] | (<object_list>) | null

    # default: rsnspace_tab

    # stats_info_cubes = (/BIC/D*, /BI0/D*, ...)

    # special statistics settings

    # (<table>:[<owner>]:<active>:[<method>]:[<sample>], ...)

    # stats_special = (<special_list>)

    # update cycle in days for dictionary statistics within standard runs

    # default: 0 - no update

    # stats_dict_cycle = 100

    # method for updating Oracle dictionary statistics

    # C - compute | E - estimate | A - auto sample size

    # default: C

    # stats_dict_method = C

    # sample size for updating dictionary statistics (stats_dict_method = E)

    # <percent> (1-100)

    # default: auto sample size

    # stats_dict_sample = 10

    # parallel degree for updating dictionary statistics

    # auto | default | null | <degree> (1-256)

    # default: Oracle default

    # stats_dict_degree = 4

    # update cycle in days for system statistics within standard runs

    # default: 0 - no update

    # stats_system_cycle = 100

    # interval for updating Oracle system statistics

    # 0 - NOWORKLOAD, >0 - interval in minutes

    # default: 0

    # stats_system_interval = 0

    # database objects to be excluded from validating structure

    # null | all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>

    # | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

    # default: value of the stats_exclude parameter, example:

    # valid_exclude = (SDBAH, SAPSR3.SDBAD)

    # recovery type [complete | dbpit | tspit | reset | restore | apply

    # | disaster]

    # default: complete

    # recov_type = complete

    # directory for brrecover file copies

    # default: $SAPDATA_HOME/sapbackup

    # recov_copy_dir = /oracle/SRQ/sapbackup

    # time period in days for searching for backups

    # 0 - all available backups, >0 - backups from n last days

    # default: 30

    # recov_interval = 30

    # degree of parallelism for applying archive log files

    # 0 - use Oracle default parallelism, 1 - serial, >1 - parallel

    # default: Oracle default

    # recov_degree = 0

    # number of lines for scrolling in list menus

    # 0 - no scrolling, >0 - scroll n lines

    # default: 20

    # scroll_lines = 20

    # time period in days for displaying profiles and logs

    # 0 - all available logs, >0 - logs from n last days

    # default: 30

    # show_period = 30

    # directory for brspace file copies

    # default: $SAPDATA_HOME/sapreorg

    # space_copy_dir = /oracle/SRQ/sapreorg

    # directory for table export dump files

    # default: $SAPDATA_HOME/sapreorg

    # exp_dump_dir = /oracle/SRQ/sapreorg

    # database tables for reorganization

    # [<owner>.]<table> | [<owner>.][<prefix>]*[<suffix>]

    # | [<owner>.][<prefix>]%[<suffix>] | (<table_list>)

    # no default

    # reorg_table = (SDBAH, SAPSR3.SDBAD)

    # table partitions for reorganization

    # [[<owner>.]<table>.]<partition>

    # | [[<owner>.]<table>.][<prefix>]%[<suffix>]

    # | [[<owner>.]<table>.][<prefix>]*[<suffix>] | (<tabpart_list>)

    # no default

    # reorg_tabpart = (PART1, PARTTAB1.PART2, SAPSR3.PARTTAB2.PART3)

    # database indexes for rebuild

    # [<owner>.]<index> | [<owner>.][<prefix>]*[<suffix>]

    # | [<owner>.][<prefix>]%[<suffix>] | (<index_list>)

    # no default

    # rebuild_index = (SDBAH~0, SAPSR3.SDBAD~0)

    # index partitions for rebuild

    # [[<owner>.]<index>.]<partition>

    # | [[<owner>.]<index>.][<prefix>]%[<suffix>]

    # | [[<owner>.]<index>.][<prefix>]*[<suffix>] | (<indpart_list>)

    # no default

    # rebuild_indpart = (PART1, PARTIND1.PART2, SAPSR3.PARTIND2.PART3)

    # database tables for export

    # [<owner>.]<table> | [<owner>.][<prefix>]*[<suffix>]

    # | [<owner>.][<prefix>]%[<suffix>] | (<table_list>)

    # no default

    # exp_table = (SDBAH, SAPSR3.SDBAD)

    # database tables for import

    # <table> | (<table_list>)

    # no default

    # do not specify table owner in the list - use -o|-owner option for this

    # imp_table = (SDBAH, SDBAD)

    # Oracle system id of ASM instance

    # default: +ASM

    # asm_ora_sid = <asm_inst> | (<db_inst1>:<asm_inst1>,

    # <db_inst2>:<asm_inst2>, <db_inst3>:<asm_inst3>, ...)

    # asm_ora_sid = (RAC001:+ASM1, RAC002:+ASM2, RAC003:+ASM3, RAC004:+ASM4)

    # asm_ora_sid = +ASM

    # Oracle home of ASM instance

    # no default

    # asm_ora_home = <asm_home> | (<db_inst1>:<asm_home1>,

    # <db_inst2>:<asm_home2>, <db_inst3>:<asm_home3>, ...)

    # asm_ora_home = (RAC001:/oracle/GRID/11202, RAC002:/oracle/GRID/11202,

    # RAC003:/oracle/GRID/11202, RAC004:/oracle/GRID/11202)

    # asm_ora_home = /oracle/GRID/11202

    # Oracle ASM root directory name

    # default: ASM

    # asm_root_dir = <asm_root>

    # asm_root_dir = ASM

    ===========================================================================================

     
    Regards,

    Thiago



    Hi all, when I try to run brconnect -u / -c -f stats -t oradict_stats, I get the errors below. I have followed SAP Note 838725 (Oracle Database 10g: New database statistics), but I am still not clear how to overcome this error.
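    As a diagnostic sketch (assuming a DBA connection, e.g. `sqlplus / as sysdba`), the exact DBMS_STATS call quoted in the BRCONNECT log can be run directly in SQL*Plus to check whether ORA-20003 is raised by the database itself, independently of BRCONNECT:

    ```sql
    -- Same statement as in the BRCONNECT log (stats_oradict_collect-1):
    BEGIN
      DBMS_STATS.GATHER_DICTIONARY_STATS (
        ESTIMATE_PERCENT => NULL,
        METHOD_OPT       => 'FOR ALL COLUMNS SIZE AUTO',
        GRANULARITY      => 'ALL',
        CASCADE          => TRUE,
        OPTIONS          => 'GATHER',
        NO_INVALIDATE    => FALSE);
    END;
    /
    ```

    If the same ORA-20003 appears here, the problem lies in the Oracle DBMS_STATS installation rather than in BRCONNECT, which narrows the search to Oracle-side notes and patches for that bug number.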

     

     

    sapql2:oraql2 1> brconnect -u / -c -f stats -t oradict_stats

    BR0801I BRCONNECT 7.00 (16)

    BR0805I Start of BRCONNECT processing: cebliuwm.sta 2009-09-11 23.25.28

     

    BR0280I BRCONNECT time stamp: 2009-09-11 23.25.29

    BR0807I Name of database instance: QL2

    BR0808I BRCONNECT action ID: cebliuwm

    BR0809I BRCONNECT function ID: sta

    BR0810I BRCONNECT function: stats

    BR0812I Database objects for processing: ORADICT_STATS

    BR1314I Oracle dictionary statistics will be collected with default options

    BR0126I Unattended mode active - no operator confirmation required

     

    BR0280I BRCONNECT time stamp: 2009-09-11 23.25.29

    BR1311I Starting collection of Oracle dictionary statistics...

    BR0285I This function can take several seconds/minutes - be patient...

    BR0280I BRCONNECT time stamp: 2009-09-11 23.25.30

     

    BR0301E SQL error -20003 at location stats_oradict_collect-1, SQL statement:

    'BEGIN DBMS_STATS.GATHER_DICTIONARY_STATS (ESTIMATE_PERCENT => NULL, METHOD_OPT

    => 'FOR ALL COLUMNS SIZE AUTO', GRANULARITY => 'ALL', CASCADE => TRUE, OPTIONS =

    > 'GATHER', NO_INVALIDATE => FALSE); END;'

    ORA-20003: Specified bug number (5099019) does not exist

    ORA-06512: at "SYS.DBMS_STATS", line 14379

    ORA-06512: at "SYS.DBMS_STATS", line 14725

    ORA-06512: at "SYS.DBMS_STATS", line 17028

    ORA-06512: at "SYS.DBMS_STATS", line 17070

    ORA-06512: at line 1

    BR1313E Collection of Oracle dictionary statistics failed

     

     

    BR0806I End of BRCONNECT processing: cebliuwm.sta 2009-09-11 23.25.30

    BR0280I BRCONNECT time stamp: 2009-09-11 23.25.30

    BR0804I BRCONNECT terminated with errors

     

    regards,

    rahul

