Inconsistent failure to back up with SSR 2013 R2
I get a notice that my current backup failed to back up RETools(*:\). In the setup, SSR treats RETOOLS as a disk that needs to be backed up along with the C: drive and the others. I don't know what RETOOLS is or whether I need to back it up. It will back up correctly several times in a row, but then I get the failure message again. I'm a home user with limited knowledge. Any help?
Increase the number of results in the search wizard
Hello,
I have Backup Exec 2014 SP2 on Server 2012 R2.
When I search for *.doc in a backup set, the search wizard returns at most 10,000 items. Is there a way to increase that limit?
Thanks
SQL backups are failing with error 23 (socket read failed)
Hi All,
File system backups are working fine, but SQL backups are failing with error code 23 (socket read failed).
Things I have already checked:
1. The SQL Server and NetBackup client services.
2. The registry entries.
Please suggest a solution.
Job detail:
02/11/2015 21:39:05 - Error bpbrm (pid=1690) bpcd on <server name> exited with status 23: socket read failed
02/11/2015 21:44:09 - Error bpbrm (pid=1690) cannot send mail because BPCD on <server name> exited with status 23: socket read failed
02/11/2015 21:44:09 - Info bpbkar (pid=0) done. status: 23: socket read failed
02/11/2015 21:44:09 - end writing
socket read failed (23)
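A sketch of a first isolation step for status 23, assuming default Unix install paths; "sqlclient01" is a placeholder for the client name as it appears in the SQL policy:

# From the master server: exercise the full bpcd handshake to the client
/usr/openv/netbackup/bin/admincmd/bptestbpcd -client sqlclient01 -verbose
# From the client: ask the master how it resolves and identifies this host
/usr/openv/netbackup/bin/bpclntcmd -pn
/usr/openv/netbackup/bin/bpclntcmd -self

If bptestbpcd fails only for the name the SQL policy uses, the problem is name resolution or connectivity for that name rather than anything SQL-specific.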
Multiple Locations but slow links
I am using Backup Exec 2014. We have one media server, and I am trying to understand how, and whether, this will work.
We have a file server running BE 2014 as the media server in location 1. BE backs up fine to the local storage there.
In location 2 we have another file server and a local NAS. The WAN link between the sites is slow relative to the amount of data. Can I set up a backup-to-disk job in location 2 that backs up to the NAS, which is also in location 2? I want to do this without all the data crossing the WAN link, but I still want the media server in location 1 to monitor it all.
I am guessing I need another media server, although there is no budget for one.
Thank you,
HMorris
UNC Path error in backup exec
Hi
When I try to create a storage device from a UNC path in Backup Exec, I get this error: "The path appears to be an invalid path. Please ensure the server name and share name are correct."
So what should I do? I have shared the Archive folder and I have access to that share.
AIX client backup getting status code 14
Hi All,
We have an AIX server whose backup consistently fails with a status 14. I set up a bpbkar log and reviewed it; a portion where the backup gets an exit status of 14 is below.
I believe the issue is a journal.v1.dat file. Is there any way to verify this?
18:00:51.499 [1028250] <4> ct_logfiles_add: add log file:curr=/usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/home/track_journal.v1.dat.p7nvc.vore.xxxx.state.nj.us_1423522807 prev=/usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/home/track_journal.v1.dat
18:00:51.500 [1028250] <4> ct_cat_open: successfully open cur /usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/home/track_journal.v1.dat.p7nvc.vore.xxxx.state.nj.us_1423522807 and pre /usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/home/track_journal.v1.dat .
18:00:51.929 [385146] <16> bpbkar: ERR - read server exit status = 14: file write failed
18:00:51.934 [557204] <4> bpbkar main: real locales <en_US en_US C en_US en_US en_US>
18:00:51.934 [557204] <4> bpbkar main: standardized locales - mnt_lc_messages <en_US> mnt_lc_ctype <en_US> mnt_lc_time <en_US> mnt_lc_collate <en_US> mnt_lc_numeric <en_US>
18:00:51.935 [557204] <2> bpbkar main: create backup id list: p7nvc.msss.xxxx.xxx_1423004406,p7nvc.msss.xxxx.xxx_1423436405,p7nvc.msss.xxxx.xxx_1423350005,p7nvc.msss.xxxx.xxx_1423263605,p7nvc.msss.xxxx.xxx_1423177205,p7nvc.msss.xxxx.xxx_1423112443,p7nvc.msss.xxxx.xxx_1423026136,p7nvc.msss.xxxx.xxx_1423004409,p7nvc.msss.xxxx.xxx_1423436408,p8mvc.vore.xxxx.state.nj.us_1423350008,p7nvc.msss.xxxx.xxx_1423263607,p7nvc.msss.xxxx.xxx_1423177208,p7nvc.msss.xxxx.xxx_1423112789,p7nvc.msss.xxxx.xxx_1423026100,p7nvc.vore.treas.state.nj.us_1423004408,p7nvc.msss.xxxx.xxx_1423436406,p7nvc.msss.xxxx.xxx_1423350006,p7nvc.msss.xxxx.xxx_1423263606,p7nvc.msss.xxxx.xxx_1423177206,p7nvc.vore.xxxx.state.nj.us_1423112787,p7nvc.msss.xxxx.xxx_1423026097,p7nvc.msss.xxxx.xxx_1423004410,p7nvc.msss.xxxx.xxx_1423436409,p7nvc.msss.xxxx.xxx_1423350009,p7nvc....
18:00:51.935 [557204] <4> bpbkar main: For accelerator, we will be using both the 'mtime' and 'ctime' for incrementals, the current configuration is : 0, will be set to 1.
18:00:51.935 [557204] <2> logparams: bpbkar -r 2678400 -ru root -dt 86701 -to 0 -bpstart_time 1423523110 -clnt p7nvc.msss.xxxx.xxx -class p7nvc.msss.xxxx.xxx -sched incremental -st INCR -bpstart_to 2500 -bpend_to 2500 -read_to 6000 -stream_count 11 -stream_number 7 -jobgrpid 29891 -blks_per_buffer 512 -tir -tir_plus -use_otm -fso -b p7nvc.msss.xxxx.xxx_1423522809 -kl 28 -fscp -S nbv40w1 -storagesvr nbv40w1 -bidlist bid@p7nvc.msss.xxxx.xxx_p7nvc.msss.xxxx.xxx_1423522809 -use_ofb
18:00:51.979 [385146] <4> ct_logfiles_deleteall: unlink log files /usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/proc/track_journal.v1.dat.p7nvc.vore.xxxx.log.020915
...skipping...
18:00:50.481 [868588] <4> bpbkar: INF - Estimate:-1 -1
18:00:50.497 [385146] <16> bpbkar: ERR - Cannot write to tir_info_file, fd is NULL.
18:00:50.498 [385146] <16> bpbkar: ERR - bpbkar FATAL exit status = 14: file write failed
18:00:50.498 [385146] <4> bpbkar: INF - EXIT STATUS 14: file write failed
18:00:50.621 [868588] <2> bpbkar add_to_filelist: starting sizeof(filelistrec) <128>
18:00:50.621 [868588] <4> bpbkar: INF - Throttle duration required = 512 usec.
18:00:50.623 [868588] <4> bpbkar: start to backup filelist / ,nb_fscp_enabled is 1
18:00:50.623 [868588] <4> bpbkar: INF - Processing /
18:00:50.623 [868588] <4> bpbkar: filelist /,is folder traclog path is /_track_log_root_.
18:00:50.623 [868588] <2> ct_cat_close: close current track journal
18:00:50.623 [868588] <2> ct_cat_close: close previous track journal
18:00:50.623 [868588] <2> mkdir_p: make dir(/usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/_track_log_root_) result(0)
18:00:50.623 [868588] <2> ct_set_fs_ops: successfully call ct_set_fs_ops
18:00:50.623 [868588] <2> ct_tmpfile_clean: open dir(/usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/_track_log_root_), result(1)
18:00:50.623 [868588] <2> ct_tmpfile_clean: keep file(.)
18:00:50.623 [868588] <2> ct_tmpfile_clean: keep file(..)
18:00:50.623 [868588] <2> ct_tmpfile_clean: keep file(track_journal.v1.dat)
18:00:50.638 [868588] <4> ct_cat_open: waiting to lock current track log
18:00:50.638 [868588] <4> ct_cat_open: lock current track log: success
18:00:50.638 [868588] <4> ct_logfiles_add: add log file:curr=/usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/_track_log_root_/track_journal.v1.dat.p7nvc.vore.msss.xxxx.xxx_1423522805 prev=/usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/_track_log_root_/track_journal.v1.dat
18:00:50.638 [868588] <4> ct_cat_open: successfully open cur /usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/_track_log_root_/track_journal.v1.dat.p7nvc.vore...
18:11:10.206 [1089596] <16> bpbkar: ERR - Cannot write to tir_info_file, fd is NULL.
18:11:10.206 [1089596] <16> bpbkar: ERR - bpbkar FATAL exit status = 14: file write failed
18:11:10.206 [1089596] <4> bpbkar: INF - EXIT STATUS 14: file write failed
18:11:11.334 [1089596] <16> bpbkar: ERR - read server exit status = 14: file write failed
18:11:11.348 [1089596] <2> ct_cat_close: close current track journal
18:11:11.348 [1089596] <2> ct_cat_close: close previous track journal
18:11:11.360 [1089596] <4> ct_logfiles_deleteall: unlink log files /usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/proc/track_journal.v1.dat.p7nvc.vore.xxxx.state.nj.us_1423523426
18:11:11.360 [1089596] <4> ct_logfiles_deleteall: finish logfiles delete/rename
18:11:11.360 [1089596] <4> ct_manage: state(1) != CT_SUCCESS, returning
18:11:11.360 [1089596] <2> ct_fini: fini is called
18:11:11.360 [1089596] <4> bpbkar: INF - setenv FINISHED=0
23:31:29.157 [1167384] <4> is_excluded: Excluded /p7nvc/AllPreMoveBackups/p7nvc/filesystems-on-p7nvc/usr/ora/1110/product/11.1.0/db_1/.patch_storage/12419384_Jul_11_2011_02_07_14/backup/sysman/admin/emdrep/sql/core by exclude_list entry core
23:31:35.486 [1167384] <4> is_excluded: Excluded /p7nvc/AllPreMoveBackups/p7nvc/filesystems-on-p7nvc/usr/ora/1110/product/11.1.0/db_1/.patch_storage/12419384_Jul_11_2011_02_07_14/files/sysman/admin/emdrep/sql/core by exclude_list entry core
23:31:39.799 [1167384] <4> is_excluded: Excluded /p7nvc/AllPreMoveBackups/p7nvc/filesystems-on-p7nvc/usr/ora
AIX 6.1 server
NBU client level 7.6.0.3
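Status 14 is a client-side "file write failed", and the bpbkar log above fails while writing the TIR/track-journal files under /usr/openv/netbackup/track, so a quick way to verify the journal theory is to check the client where those files live (paths copied from the log; a sketch, not a definitive fix):

# Is the filesystem that holds the track logs out of space or inodes?
df -k /usr/openv
# Inspect the track journal the backup was updating when it failed
ls -l /usr/openv/netbackup/track/nbv40w1/nbv40w1/p7nvc.msss.xxxx.xxx/p7nvc.msss.xxxx.xxx/home/track_journal.v1.dat

If the filesystem is full, freeing space and rerunning is the obvious first test; note that deleting an accelerator track log forces the next backup to rescan everything.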
Thanks for any help/advice
BERemote failed logins using odd credentials
New install of Backup Exec 2014; we have installed the Windows Agent on a VMware virtual server. For the last three nights we have been receiving the following error in the Security event log:
Event 4625
An account failed to log on.
Subject:
Security ID: SYSTEM
Account Name: <LOCAL SERVER>
Account Domain: <OUR DOMAIN>
Logon ID: 0x3e7
Logon Type: 4
Account For Which Logon Failed:
Security ID: NULL SID
Account Name: <ESX Admin account>
Account Domain:
Failure Information:
Failure Reason: Unknown user name or bad password.
Status: 0xc000006d
Sub Status: 0xc0000064
Process Information:
Caller Process ID: 0x140
Caller Process Name: C:\Program Files\Symantec\Backup Exec\RAWS\beremote.exe
Network Information:
Workstation Name: <LOCAL SERVER>
Source Network Address: -
Source Port: -
Detailed Authentication Information:
Logon Process: Advapi
Authentication Package: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
Transited Services: -
Package Name (NTLM only): -
Key Length: 0
Here's what I don't get: the <ESX Admin Account> being used is set up properly on the Backup Exec media server under logon accounts, but it is used for accessing the ESX hosts themselves, not the guest servers. That account exists only on the ESX servers, not as a user account in the local domain. The guest server should be using a domain account set up specifically for the Backup Exec system, which is also configured as the default logon account. This server, <LOCAL SERVER>, is actually backed up at 7 PM, displays no errors, and is successful. The error above occurs about 4 hours later, when no active jobs involving this server are running. I have seen the same issue on 3 other servers, doing exactly the same thing. Any ideas what is going on?
Note that this issue is not preventing jobs from running, but it shows up in our security logs as a failed attempt by a "special access account"; our auditors see it and are questioning why it is happening. That is why I need this resolved. Thanks.
Adding more library drives, how to handle max concurrent write drives
I am looking to add 4 more drives to my 6-drive LTO-5 library. Every policy uses an SLP to handle duplication from disk to tape. The problem we have now is a backlog of duplication jobs waiting for a tape drive.
Looking through the documentation, it says something to the effect that if maximum concurrent write drives is set to one less than the physical number of drives, the remaining tape drive is available for restores and other non-backup operations (for example, imports, verifies, and duplicating backups).
My question is about the "duplicating backups" part. If SLP duplication is the only thing accessing the tape drives, will it use only the 9 drives I allowed, or also the 1 drive I held back?
My reasoning for leaving 1 drive of the 10 out of the loop with the max concurrent write setting is for the sake of cataloging and restores. Will this idea work? Will giving Catalog/VaultCatalog its own drive (sort of) get anything out of sync if duplication jobs are still running or queued?
We are running 7.6.0.3 on Windows.
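Two read-only commands bear on this (same names on Windows, under install_path\NetBackup\bin\admincmd; a sketch):

# Show each storage unit's settings, including Maximum concurrent write drives
bpstulist -U
# Summarize the SLP backlog, i.e. images whose duplication copies are incomplete
nbstlutil report

As I understand it, the write side of an SLP duplication counts against Maximum concurrent write drives just as a backup does, so the held-back drive stays free only for read work: restores, verifies, imports, and the read half of duplications. It would not be a way to dedicate a drive to catalog backup writes, since those are writes too.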
Thank you for your input.
Getting error on a drive
Hello All,
I am getting the error message below for a drive in /var/adm/messages:
Feb 12 07:12:07 p02danbs03 avrd[23319]: [ID 229023 daemon.notice] Reservation Conflict status from OC9940B_a1p02d14 (device 6)
Kindly suggest; backups on this drive are failing randomly with error 84.
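A Reservation Conflict usually means another host, or a crashed job, still holds a SCSI reservation on the drive. Two standard Media Manager diagnostics, assuming default Unix paths (a sketch):

# Show drive status as the device daemon sees it
/usr/openv/volmgr/bin/vmoprcmd -d
# Low-level probe of the attached tape devices
/usr/openv/volmgr/bin/scan

If the drive is shared across media servers (SSO), running vmoprcmd -d on each host that shares OC9940B_a1p02d14 should show whether a different host still thinks it owns the drive.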
BE 2014 - SharePoint - Unable to Query Team Database meta data
Hello,
I'm trying to back up a SharePoint farm in BE 2014 and receive an "Unable to Query Team Database meta data" error. I read several articles about this error for BE 2010 and BE 2012. One of them, http://www.symantec.com/connect/issues/backup-sharepoint-2013-fails-unable-query-team-database-meta-data-workflow-service-applicatio, says to "deselect the Workflow Service Application Proxy" in BE 2012, as there is no resolution or plan to fix it. Has this error been fixed in BE 2014, or is there a better workaround than just deselecting the item?
Thanks,
DASFTP
NetBackup Finds Library, But Not Drives Attached using NDMP
Running NetBackup 7.1 on Windows Server 2008 R2. I have an HP StorageWorks MSL4048 tape library with 2 drives installed. These are attached with fibre to 2 fibre channel ports on the back of my NetApp FAS2240. When I run sysconfig -t and sysconfig -m on the NetApp, it shows the 2 tape drives and the library. When I try to discover these devices in NetBackup using the storage wizard, it finds only the robotic library and lists the 2 drives as "unused element address". I have seen this happen once before, but that was a problem with an old NetApp that did not have tape config files for the newer LTO5 drives. This is a new NetApp that already has the HP and IBM LTO5 drive tape config files, and it detects the drives successfully anyway. Any ideas why NetBackup does not? We verified that NDMP is ON, and the account name and password must be good; otherwise it would detect nothing at all. If I unplug the fibre and run the wizard again, it detects nothing, so I know that part is all working correctly.
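Two NDMP-side checks worth comparing against the sysconfig output, run from the NetBackup server (on Windows they live under the Volmgr\bin folder of the install path; "netapp01" is a placeholder for the filer's NDMP hostname):

# Verify that the stored NDMP credentials actually authenticate against the filer
tpautoconf -verify netapp01
# Ask the filer, over NDMP, which tape devices it presents
tpautoconf -probe netapp01

If -probe lists both drives but the wizard still reports "unused element address", the usual culprit is drive serialization (matching drive serial numbers to the library's element addresses) rather than NDMP connectivity itself.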
Backup Exec 2014
Dear all, good afternoon.
I have a problem with Backup Exec 2014 on Windows Server 2012 Standard. The AD DC is failing. My question is: when there is a problem with the AD DC, does it affect the domain credentials that Backup Exec authenticates with? The job appears to run, but after about 16 minutes it cannot find the folder to back up and send to tape. I appreciate your support.
How to delete tape backups in BE 2014
It appears that with BE 2014, we can no longer delete backups which exist on tape media.
Under Backup Sets, we used to be able to right click on a backup and select delete. With BE 2014, the delete option is no longer available. In addition, for a tape backup, we no longer appear to have options to reset the expiration date, select retain or to expire the backup -- these options are grayed out.
How are we now supposed to delete tape backups? In our case, we have a truckload of old backups on tapes from older, no-longer-existing tape drives. The new drives can read the tapes but not write to them. How do we get rid of these backups? Hopefully I'm missing something here.
Please Symantec bring back the delete option!
Over an hour delay between backing up servers
I've only noticed it in the last week, but our BE media server now waits over an hour before starting its backup once the previous server has finished. I have tried rearranging which tape each server gets backed up to across our dual library, but it hasn't made any difference; in fact, it has made things even slower.
Job 1 backs up to drive 1 at 11:45pm nightly
Job 2, which includes the BE media server, backs up to drive 2 at 11:50pm nightly.
SQL On Disk copies of databases -- any drawbacks to non-snapshotting?
Hi there -
I'd like some guidance in the area I'm about to explain, as I am a little confused. For some of our SQL servers it would be ideal to have copies of the databases on the local server, so I'd like to use the option to place SQL database copies on disk when the jobs run. I noticed that you can't do this without first disabling snapshotting.
It's that last part that gets me. To me, it means going to the Advanced Open File section and unselecting "Use snapshot technology". But wouldn't that apply to the file system backup too? I don't want that. I suppose you could then separate the SQL database backup from the rest of the SQL server and run them separately. Okay, but what is the drawback to doing what I guess are called streaming SQL backups, as opposed to snapshot SQL backups? Am I losing anything by doing this?
Perhaps this would be less complicated if I just configured SQL Management Studio to make DB copies on disk periodically and left BE out of it?
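If you do go the native route, one caution: a plain native full backup resets the differential base that Backup Exec's differential SQL jobs depend on, while a copy-only backup does not. A minimal sketch of a chain-safe on-disk copy (server, database, and path are placeholders), runnable from a command prompt or a scheduled task:

sqlcmd -S SQLSERVER01 -E -Q "BACKUP DATABASE [MyDatabase] TO DISK = N'D:\DBCopies\MyDatabase.bak' WITH COPY_ONLY, INIT"

COPY_ONLY keeps the copy out of the backup chain, and INIT overwrites the previous copy in that file instead of appending to it.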
I'd appreciate any advice.
Thanks!
Best practice for backup job
I am trying to wrap my head around the backup types and retention in Backup Exec 2014. I am backing up some Hyper-V VMs from the host and only want to keep the backups for 3 days. How would I accomplish this?
It seems I'd want a full backup every 3 days, but I'm not sure what to do about incrementals (or perhaps differentials), nor how long to keep each type.
Any other ideas?
Any good reading on really understanding the backup types and retention to accomplish various backups?
Backup Exec 2014
Dear all, good day.
I have a Backup Exec 2014 SP1 server with a SQL 2014 database on Windows Server 2012 R2 Standard. I have deployed and installed the agent and created scheduled jobs, but every time a job runs it backs up a few megabytes and then the run is cut short. This has been going on for more than two weeks now, and despite much review I cannot find a solution.
Thank you for your help.
Backup Exec 12.5 Changes Media Pool Assignment Automatically
Hi,
I am running Backup Exec 12.5 on a Windows 2008 server. I use two media pools, "Onsite" and "Offsite". Occasionally during backup jobs, Backup Exec moves a tape from "Onsite" to "Offsite" or vice versa, which lets it put a backup job on a tape from the wrong pool. Sometimes this seems to happen because the only writable tapes are in the other media pool, but that is not always the case.
Regardless of why it moves tapes between pools, I would like Backup Exec to pause or error out rather than reassign tapes to the other media pool. How do I make it do this?
Thanks for any help!
Netbackup Migration Issue
Hi All,
Hoping someone can point me in the right direction.
I am trying to migrate my NetBackup master server from Solaris on SPARC to x86 Linux.
I successfully exported the catalog, installed NetBackup on Linux, created the same links/paths as on the original, and am using the exact same hostname.
I go through the catalog recovery wizard, and it says it completes successfully.
But I am missing tons of backups when I try to find them in the GUI.
For instance, the log file for the catalog recovery contains this:
20:38:48 (2.001) /netbackup_db/db/images/asfasitp02/1346000000/
20:38:48 (2.001) /netbackup_db/db/images/asfasitp02/1346000000/catstore/NETAPP-SAP-GPD-HOT_1346480190_FULL.f-list
20:38:48 (2.001) /netbackup_db/db/images/asfasitp02/1346000000/catstore/NETAPP-SAP-GPD-HOT_1346480190_FULL.f_imgHeader0
20:38:48 (2.001) /netbackup_db/db/images/asfasitp02/1346000000/catstore/NETAPP-SAP-GPD-HOT_1346480190_FULL.f_imgRecord0
20:38:49 (2.001) /netbackup_db/db/images/asfasitp02/1346000000/catstore/NETAPP-SAP-GPD-HOT_1346480190_FULL.f_imgDir0
20:38:49 (2.001) /netbackup_db/db/images/asfasitp02/1346000000/catstore/NETAPP-SAP-GPD-HOT_1346480190_FULL.f_imgFile0
20:38:49 (2.001) /netbackup_db/db/images/asfasitp02/1346000000/catstore/NETAPP-SAP-GPD-HOT_1346480190_FULL.f_imgNDMP0
20:38:49 (2.001) /netbackup_db/db/images/asfasitp02/1346000000/catstore/NETAPP-SAP-GPD-HOT_1346480190_FULL.f_imgExtraObj0
20:38:44 (2.001) /netbackup_db/db/images/asfasitp02/1338000000/catstore/NETAPP-SAP-APD-HOT_1338346755_FULL.f_imgNDMP0
20:38:44 (2.001) /netbackup_db/db/images/asfasitp02/1338000000/catstore/NETAPP-SAP-APD-HOT_1338346755_FULL.f_imgExtraObj0
20:38:44 (2.001) /netbackup_db/db/images/asfasitp02/1338000000/catstore/NETAPP-SAP-APD-HOT_1338346755_FULL.f_imgUserGroupNames0
20:38:44 (2.001) /netbackup_db/db/images/asfasitp02/1338000000/catstore/NETAPP-SAP-APD-HOT_1338346755_FULL.f_imgStrings0
But if I do an ls on the directory:
root@chazbs02 { /usr/openv/netbackup/logs/user_ops/root/logs }
> 326 $ ls /netbackup_db/db/images/asfasitp02/1338000000
ls: cannot access /netbackup_db/db/images/asfasitp02/1338000000: No such file or directory
So the log file said it was restoring, but it's not there at the end?
Here is the bottom of the log file from restore:
20:39:31 (2.001) chmod 655 /netbackup_db/db/class/NETAPP-SAP-P01-COLD to reset permissions.
20:39:31 (2.001) chmod 2750 /opt/openv/var/websvccreds/WSL to reset permissions.
20:39:31 (2.001) chmod 2750 /opt/openv/var/global/wsl/credentials/clients to reset permissions.
20:39:31 (2.001) chmod 2750 /opt/openv/var/global/wsl/credentials to reset permissions.
20:39:42 (2.001) INF - TAR EXITING WITH STATUS = 0
20:39:42 (2.001) INF - TAR RESTORED 209596 OF 209596 FILES SUCCESSFULLY
20:39:42 (2.001) INF - TAR KEPT 0 EXISTING FILES
20:39:42 (2.001) INF - TAR PARTIALLY RESTORED 0 FILES
20:39:42 (2.xxx) INF - Status = the requested operation was successfully completed.
20:40:07 INF - Database restore started
Restore started Thu Feb 12 20:40:10 2015
20:40:11 (4.001) Restoring from copy 1 of image created Fri Jan 9 14:00:32 2015 from policy Catalog-Backup
20:40:11 (4.001) TAR STARTED 31364
20:40:12 (4.001) INF - Beginning restore from server chazbs02 to client chazbs02.
20:40:14 (4.001) /opt/openv/db/staging/DARS_DATA.db
20:40:14 (4.001) /opt/openv/db/staging/DARS_INDEX.db
20:40:15 (4.001) /opt/openv/db/staging/DBM_DATA.db
20:40:15 (4.001) /opt/openv/db/staging/DBM_INDEX.db
20:40:15 (4.001) /opt/openv/db/staging/EMM_DATA.db
20:40:16 (4.001) /opt/openv/db/staging/EMM_INDEX.db
20:40:16 (4.001) /opt/openv/db/staging/JOBD_DATA.db
20:40:16 (4.001) /opt/openv/db/staging/NBDB.db
20:40:16 (4.001) /opt/openv/db/staging/NBDB.log.1
20:40:16 (4.001) /opt/openv/db/staging/SEARCH_DATA.db
20:40:16 (4.001) /opt/openv/db/staging/SEARCH_INDEX.db
20:40:17 (4.001) /opt/openv/db/staging/SLP_DATA.db
20:40:17 (4.001) /opt/openv/db/staging/SLP_INDEX.db
20:40:17 (4.001) /opt/openv/db/staging/databases.conf
20:40:17 (4.001) /opt/openv/db/staging/server.conf
20:40:17 (4.001) /opt/openv/db/staging/vxdbms.conf
20:40:17 (4.001) INF - TAR EXITING WITH STATUS = 0
20:40:17 (4.001) INF - TAR RESTORED 16 OF 16 FILES SUCCESSFULLY
20:40:17 (4.001) INF - TAR KEPT 0 EXISTING FILES
20:40:17 (4.001) INF - TAR PARTIALLY RESTORED 0 FILES
20:40:18 (4.001) Status of restore from copy 1 of image created Fri Jan 9 14:00:32 2015 = the requested operation was successfully completed
20:40:18 INF - Server status = 0
20:40:18 (4.xxx) INF - Status = the requested operation was successfully completed.
20:40:23 INF - Database recovery started
20:40:23 WRN - No database backup found in /usr/openv/db/staging for NBAZDB defined in vxdbms.conf
20:40:23 INF - Shutting down databases
20:40:24 INF - Applying transaction log: /usr/openv/db/staging/NBDB.log.1
20:40:24 INF - Applying transaction log: /usr/openv/db/staging/NBAZDB.log.1
20:40:24 INF - Applying transaction log: /usr/openv/db/staging/NBDB.log
20:40:24 INF - Moving database files from /usr/openv/db/staging
20:40:24 INF - Source /usr/openv/db/staging/EMM_DATA.db
20:40:24 INF - Destination /usr/openv/db/data/EMM_DATA.db
20:40:24 INF - Source /usr/openv/db/staging/DBM_DATA.db
20:40:24 INF - Destination /usr/openv/db/data/DBM_DATA.db
20:40:25 INF - Source /usr/openv/db/staging/DARS_DATA.db
20:40:25 INF - Destination /usr/openv/db/data/DARS_DATA.db
20:40:25 INF - Source /usr/openv/db/staging/SEARCH_DATA.db
20:40:25 INF - Destination /usr/openv/db/data/SEARCH_DATA.db
20:40:25 INF - Source /usr/openv/db/staging/JOBD_DATA.db
20:40:25 INF - Destination /usr/openv/db/data/JOBD_DATA.db
20:40:25 INF - Source /usr/openv/db/staging/SLP_DATA.db
20:40:25 INF - Destination /usr/openv/db/data/SLP_DATA.db
20:40:25 INF - Source /usr/openv/db/staging/NBDB.db
20:40:25 INF - Destination /usr/openv/db/data/NBDB.db
20:40:25 INF - Source /usr/openv/db/staging/EMM_INDEX.db
20:40:25 INF - Destination /usr/openv/db/data/EMM_INDEX.db
20:40:25 INF - Source /usr/openv/db/staging/DBM_INDEX.db
20:40:25 INF - Destination /usr/openv/db/data/DBM_INDEX.db
20:40:26 INF - Source /usr/openv/db/staging/DARS_INDEX.db
20:40:26 INF - Destination /usr/openv/db/data/DARS_INDEX.db
20:40:26 INF - Source /usr/openv/db/staging/SEARCH_INDEX.db
20:40:26 INF - Destination /usr/openv/db/data/SEARCH_INDEX.db
20:40:26 INF - Source /usr/openv/db/staging/SLP_INDEX.db
20:40:26 INF - Destination /usr/openv/db/data/SLP_INDEX.db
20:40:26 INF - Creating /usr/openv/db/data/vxdbms.conf
20:40:26 INF - Creating /usr/openv/var/global/server.conf
20:40:26 INF - Creating /usr/openv/var/global/databases.conf
20:40:26 INF - Starting up databases
20:40:31 INF - Database recovery successfully completed
20:40:31 INF - Recovery successfully completed
20:40:42 INF - Catalog recovery has completed.
Does anything look out of place? There are thousands of entries in the log for images that were restored, but they are not there.
I'm also going to attach the entire catalog restore log in case anyone can figure this out.
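Two quick checks that would separate "not restored" from "not visible" (the client name is taken from the log above; the date range is an example bracketing the 1338xxxxxx/1346xxxxxx image timestamps, which fall in mid-2012):

# Does the image database know about the missing backups at all?
/usr/openv/netbackup/bin/admincmd/bpimagelist -client asfasitp02 -d 05/01/2012 -e 09/30/2012 -U
# Is the images path NetBackup reads the same tree the recovery wrote to?
ls -ld /usr/openv/netbackup/db/images /netbackup_db/db/images

The recovery log shows files landing under /netbackup_db/db/images, so if /usr/openv/netbackup/db (or db/images) is not a link pointing at /netbackup_db/db on the new server, the restored image files and the ones NetBackup reads would be two different trees.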
Unable to sort OpsCenter 7.6.0.4 Week At A Glance report
I'm not able to sort the Week At A Glance report in OpsCenter v7.6.0.4 Analytics.
I would expect it to be sortable by default, but it is not; I also do not see any sorting option when scrolling through the parameter options.