Channel: Symantec Connect - Backup and Recovery - Discussions
Viewing all 6307 articles

Event 1023 Windows cannot load the extensible counter DLL Backup Exec.

I need a solution

I am getting the following events from our newly built Windows Server 2012 R2 server with BE 15. I am not sure what is causing this issue, but it appears to be related to BE and performance counters.

Log Name:      Application
Source:        Microsoft-Windows-Perflib
Date:          4/24/2015 11:02:17 AM
Event ID:      1008
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      XXXXXXX
Description:
The Open Procedure for service "BITS" in DLL "C:\Windows\System32\bitsperf.dll" failed. Performance data for this service will not be available. The first four bytes (DWORD) of the Data section contains the error code.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Perflib" Guid="{13B197BD-7CEE-4B4E-8DD0-59314CE374CE}" EventSourceName="Perflib" />
    <EventID Qualifiers="49152">1008</EventID>
    <Version>0</Version>
    <Level>2</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2015-04-24T18:02:17.000000000Z" />
    <EventRecordID>3405</EventRecordID>
    <Correlation />
    <Execution ProcessID="0" ThreadID="0" />
    <Channel>Application</Channel>
    <Computer>XXXXXXX</Computer>
    <Security />
  </System>
  <UserData>
    <EventXML xmlns="Perflib">
      <param1>BITS</param1>
      <param2>C:\Windows\System32\bitsperf.dll</param2>
      <binaryDataSize>4</binaryDataSize>
      <binaryData>02000000</binaryData>
    </EventXML>
  </UserData>
</Event>

Log Name:      Application
Source:        Microsoft-Windows-Perflib
Date:          4/24/2015 10:53:43 AM
Event ID:      1023
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      XXXXX
Description:
Windows cannot load the extensible counter DLL Backup Exec. The first four bytes (DWORD) of the Data section contains the Windows error code.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Perflib" Guid="{13B197BD-7CEE-4B4E-8DD0-59314CE374CE}" EventSourceName="Perflib" />
    <EventID Qualifiers="49152">1023</EventID>
    <Version>0</Version>
    <Level>2</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2015-04-24T17:53:43.000000000Z" />
    <EventRecordID>3403</EventRecordID>
    <Correlation />
    <Execution ProcessID="0" ThreadID="0" />
    <Channel>Application</Channel>
    <Computer>XXXXXX</Computer>
    <Security />
  </System>
  <UserData>
    <EventXML xmlns="Perflib">
      <param1>Backup Exec</param1>
      <binaryDataSize>4</binaryDataSize>
      <binaryData>7E000000</binaryData>
    </EventXML>
  </UserData>
</Event>


Deduplication & Optimized duplication - multiple jobs for one server?

I need a solution

Hi There,

I have a number of servers that I want to back up with BE 2014, using the deduplication add-on. I will also be adding a stage to duplicate certain backups to an offsite, shared dedupe store.

I have a query about using multiple jobs for the same backup server.

Say I create job A (consisting of fulls and incrementals), and this job backs up one server to the primary dedupe store, with the backups duplicated to another dedupe store offsite.

Then I go along and create job B with different retention settings (also backing up the same server, to the same dedupe store), and this is also duplicated to an offsite dedupe store.

Will job B go off and create a completely new batch of files (e.g. full backups) on the primary dedupe store, or will everything be completely deduplicated, with no duplicate files created, because it can 'see' all of the full backups that were previously created via job A?

If new backups are created, will the same thing happen in the offsite dedupe store, with the duplicate jobs also creating new backups in the offsite store?

Basically, I will be creating one job for daily/weekly, and the weekly will be duplicated offsite. I will be creating a separate monthly job, with suitable retention, and this will have a duplicate stage to send it offsite. I will also be creating separate quarterly and annual jobs, each with suitable retention.

Is this a good approach, and will it all work together and deduplicate well?

Thanks in advance for your advice

Why we set robot path to one of the drive

I need a solution

Why do we set the robot path to one of the drives in a tape library, and how can we check which drive that path is set on, from the master server command line on AIX? What happens if that drive becomes faulty, and what do we need to do? Do we need to reset the path to another working drive while the faulty one is being replaced, and do we need to configure the drive again, or is there something else we can do?
The robot is under master server control on AIX.

I am looking for the commands for the AIX platform rather than the GUI.
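For reference, a minimal sketch of the device-configuration commands typically used on a UNIX NetBackup master/media server. The /usr/openv path is the standard install location; treat the exact flags as assumptions to verify against your version's command reference.

```python
# Hedged sketch: build the NetBackup volume-manager commands usually used
# to inspect the robot/drive configuration from the AIX command line.
VOLMGR = "/usr/openv/volmgr/bin"

def list_device_config_cmd() -> list:
    # tpconfig -l lists configured robots and drives, including which
    # drive the robotic control path is associated with
    return [f"{VOLMGR}/tpconfig", "-l"]

def scan_cmd() -> list:
    # scan probes attached devices and reports their OS device paths,
    # useful after a faulty drive is replaced
    return [f"{VOLMGR}/scan"]

print(" ".join(list_device_config_cmd()))
```

After a drive swap, re-running the device configuration (tpconfig, or the device configuration wizard equivalent) is the usual way to re-associate paths.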


BE2014 Deduplication - how many concurrent jobs do you run, what spec is your backup server?

I need a solution

Hi

I have a couple of quick questions about the dedupe store concurrent jobs setting (for a standard dedupe store running on the media server, with no external devices and no OST devices).

What do people normally leave this setting at? It's looking like I'm going to be running quite a few jobs at once, but I'm hesitant to run more than 3 jobs at once, as it might slow all the jobs down too much.

I have BE running as a VM with 2 x dual-core processors and 32 GB of RAM. If I double this to 8 cores in total, will it help me run more concurrent jobs efficiently?

Thanks in advance for your opinions

BE 14.1 Job Status Queued

I need a solution

I'm using BE 14.1 SP2 with the latest hotfix applied, on Server 2012 R2, with local disk storage. Suddenly jobs hold in the queued state and stay there forever until manually cancelled. Storage is online and active.

The software was working fine and made around 12 good backups. I have already deleted all the jobs and created them again, deleted the servers, and rebuilt the database.

The only thing I can't do is erase and recreate the storage; a "Catastrophic failure" error is shown when I try.

Any suggestions?

Attachments: pantalla backup.PNG, event log.PNG

System Drive Backups are creating/editing System Restore files

I need a solution

Automatic or manual, incremental or full backups of my system drive (C:), which has system protection turned on, are creating and editing the system restore files in the C:\System Volume Information directory. No restore points show as available, and this activity is chewing up disk space. Why is this being done, and how do I turn it off?

Image 001.jpg

Image 002.jpg

4/22/2015 12:51 PM = Creation of "1st point after deleting all" restore point

4/22/2015 5:00:50 PM = Auto incremental of C: drive

4/24/2015 9:46 PM = Manual incremental of C: drive

4/23/2015 5:00:58 PM = Auto incremental of C: drive

So you can see where SSR is creating and editing the system restore files. It is chewing up unnecessary disk space, and I cannot see how this would be useful for any kind of SSR image restore. I would like to disable this "feature".

Thanks for your help -- David

Oracle RAC backup

I need a solution

Hi All,

Please clarify how to configure Oracle RAC backup, and provide the steps to be followed.

How to scratch/clean an appliance...

I need a solution

I have a v2.5.2 N5220 appliance which I need to decommission and later re-deploy.

It is/was the 4TB model with no disk shelves.

It was previously configured as a master/media with an MSDP pool (no advanced disk).

I need to ensure, and be absolutely sure, that any old backup data cannot be scavenged.

I'm comfortable with re-imaging via IPMI.

So, I'm looking for guidance on how to be sure that no old backup data can be scavenged.

1) Does anyone have any tips/notes/pointers/advice?

2) Do I need to somehow reformat the storage?

3) Or does re-imaging to factory state somehow permanently render all old MSDP data storage disk blocks useless?

4) Is there a documented process?

Thanks.


Replication Director Restore

I need a solution

Hi All,

I have implemented NetBackup 7.6.0.4 Replication Director with NetApp FAS 7-Mode arrays running 8.2.1. The master is AIX, but the media server is Windows 2008 R2.

I'm using a snapshot-then-backup strategy (a snapshot on the primary volume, with the expire-after-copy setting, then a SnapVault to the secondary array with a 4-week retention).

The volume protection and provisioning are working, and I can also see the SnapVault replication working.

As I was not able to see files in the secondary (replicated) snapshots in NBU Backup, Archive & Restore, I enabled "Index from Snapshot" (I guess this is the purpose of that operation). But when it ran after the replication, I got an error.

Info nblbc(pid=0) done. status: 71: None of the files mentioned in the file list exist or may not be accessible

When I looked at the job overview, NBU is trying to index a share that is built on the fly from the snapshot name:

BACKUP \\SOURCE\replicator$\ USING \\DESTINATION\NBU_Share_21752823C7EAEEA8411E4AEE067EBC3F70000_SOURCE_vol_test_replic_NBU_NAS_2015-04-27_1137+0200_HOST_MEDIASERVER\

The thing is, share names containing "+" can't be accepted and should not work... so how is it possible for this to work? Is there a place in NBU to adapt how shares are created? I know snapshot names come from the Dataset configuration in OnCommand, but here it looks more like an NBU setting.

Thanks for your help.

Renaud

how to replicate to remote backup device

I need a solution

Hi,

Can anyone tell me: I want to take a backup from an HP-UX host to a storage unit with NBU 7.6, and then I need to copy that backup to another storage unit.

Can you tell me the steps to do this?
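The copy step can be driven by a storage lifecycle policy, or done manually with bpduplicate. A minimal sketch of constructing the command (the install path is the standard UNIX location; the backup ID and destination storage unit name below are hypothetical examples, and flags should be checked against the NetBackup Commands Reference):

```python
# Hedged sketch: construct a bpduplicate invocation that copies an
# existing backup image to a second storage unit. The backup ID and
# destination storage unit name are hypothetical.
ADMINCMD = "/usr/openv/netbackup/bin/admincmd"

def bpduplicate_cmd(backup_id: str, dest_stu: str) -> list:
    return [
        f"{ADMINCMD}/bpduplicate",
        "-backupid", backup_id,   # image to copy (find IDs with bpimagelist)
        "-dstunit", dest_stu,     # destination storage unit
    ]

cmd = bpduplicate_cmd("hpuxhost_1430000000", "stu_copy2")
print(" ".join(cmd))
```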

Media and images movement from old master to new

I need a solution

Hello Geeks !

We have one old master server running 7.5.0.6 which we are no longer using to take backups; it used to back up 4-5 clients and has around 500 media in its EMM database.

Our new master server runs the same OS (Solaris x64) and NetBackup 7.5.0.6. What I am looking for is to consolidate the client images from the old master server onto the new master server.

I will copy the client image database from the old master to the new master and vmadd all of the media?

Questions here are:

1) Would the media expire on their due dates if I simply vmadd them into our new master?

2) Do I need to create the volume pools that existed on the old master, or can I simply put the media in any read-only volume pool, since we don't intend to use them for backups?

PS: We are not looking to import all the media.

Please advise.
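For reference, a minimal sketch of building the vmadd invocations mentioned above. The /usr/openv path is the standard install location; the media IDs, media type, and pool number are hypothetical, and the exact flags should be verified against the NetBackup Commands Reference for 7.5:

```python
# Hedged sketch: build vmadd commands to register media from the old
# master in the new master's EMM. Media IDs, media type and pool number
# below are made-up examples.
VMADD = "/usr/openv/volmgr/bin/vmadd"

def vmadd_cmd(media_id: str, media_type: str = "hcart",
              pool_number: int = 1) -> list:
    return [VMADD,
            "-m", media_id,           # media ID to add
            "-mt", media_type,        # media type
            "-p", str(pool_number)]   # volume pool number

cmds = [vmadd_cmd(m) for m in ("A00001", "A00002")]
```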

VM snapshot backup with backup LAN

I need a solution

Hi Team, 

I'm a little confused, so I just wanted to verify. Please correct me if I am wrong.

For VM server backups, if I create a backup LAN (10 Gbps) and use a snapshot-based backup policy with the NBD transport mode selected, the backup will run via the backup LAN and will not touch the management LAN.

e.g.:

Server name (Management LAN): jupiter.xyz.com (10.1.1.1)

Server name (Backup LAN): jupiter-bkp.xyz.com (10.1.2.1)

The master and media servers are also on 10.1.2.x (the backup LAN range).

While creating the snapshot-based policy, I will select jupiter-bkp.xyz.com, so that the data transfer happens on the backup LAN.

Is my thinking correct?
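One quick sanity check for this kind of setup is to confirm that the client name in the policy resolves to an address inside the backup subnet. A small sketch using the example addresses above (the /24 mask is an assumption; substitute your real netmask):

```python
# Hedged sketch: verify a resolved address falls inside the backup
# subnet, using the example hosts from this post. The /24 prefix length
# is assumed for the 10.1.2.x range described above.
import ipaddress

BACKUP_NET = ipaddress.ip_network("10.1.2.0/24")

def on_backup_lan(ip: str) -> bool:
    return ipaddress.ip_address(ip) in BACKUP_NET

print(on_backup_lan("10.1.2.1"))  # jupiter-bkp.xyz.com (backup LAN)
print(on_backup_lan("10.1.1.1"))  # jupiter.xyz.com (management LAN)
```

In practice you would feed this the result of resolving the client name you selected in the policy (e.g. via socket.gethostbyname) to confirm backups will ride the backup LAN.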

Backup fails with 23 error code after writing more than 1 TB of data

I need a solution

Hi Everyone,

I'm having the exact same issue as reported here:

http://www.symantec.com/connect/forums/backup-fail...

After writing more than 1 TB of data, the backup fails with an error code of 23. The master server and the client reside on the same subnet and the traffic is not flowing through a firewall; it's all local to the switch, iptables has no rules on the client side, and the master server has no firewall enabled. This is happening with multiple Linux clients. The master server is Windows Server 2008 R2, NetBackup 7.5.0.7.

According to the link contained in that article (https://support.symantec.com/en_US/article.TECH194...), this was resolved in 7.5.0.5. I'm at 7.5.0.7 and I'm still having the issue. I've opened a case with support, but they have been unable to solve it. They were insistent that the issue was network related.

After reading the forum today, I've disabled the Checkpoint feature and will re-test to see if that workaround works.

Any help is appreciated.

Thanks,

jason

BMRLAUNCHER.EXE The memory could not be read.

I do not need a solution (just sharing information)

I recently spent months working on an issue with support before engineering got involved and mentioned it is a known issue that is not documented.

Using a 7.6.0.2 SRT, restoring a physical server using BMR would fail. I had to upgrade my SRT and ISO to 7.6.0.3, after which the issue was resolved and I was able to restore the server as expected.

2015-04-24_13-25-16_0.png

How do I move the SQL database for Backup Exec to another drive?

I need a solution

Hi,

I've been searching the forums for how to do this and did not come up with much. The Backup Exec SQL database is filling up my OS drive, and I want to move it off of that drive onto a dedicated array for Backup Exec. How do I accomplish this? Can someone point me to the correct KB article (if one exists) or provide me with steps?

Current setup:

  • OS: Server 2008 R2
  • Backup Exec 2012
  • SQL Express 2005 (installed by Backup Exec)

Thanks
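The supported route is usually the Backup Exec Utility (BEUtility), which has database management options; the underlying mechanics are a standard SQL Server detach/attach. A minimal sketch of generating that T-SQL, assuming the Backup Exec services are stopped first. The BEDB file names (bedb_dat.mdf / bedb_log.ldf) and the target path are assumptions to verify against your actual instance before running anything:

```python
# Hedged sketch: generate the detach/attach T-SQL typically involved in
# relocating a SQL Server database. File names and path are assumed
# examples; verify against the actual BEDB files and stop the Backup
# Exec services before touching the database.
def move_bedb_sql(new_dir: str):
    detach = "EXEC sp_detach_db 'BEDB';"
    attach = (
        "CREATE DATABASE BEDB ON "
        f"(FILENAME = '{new_dir}\\bedb_dat.mdf'), "
        f"(FILENAME = '{new_dir}\\bedb_log.ldf') "
        "FOR ATTACH;"
    )
    return detach, attach

detach, attach = move_bedb_sql("E:\\BEDB")
```

The physical .mdf/.ldf files are copied to the new location between the detach and the attach.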


Very large file share...other solutions?

I need a solution

NetBackup master and media servers running Windows 2008 R2, version 7.6.0.4 (one media server running 7.5.0.4), backing up to Data Domain.

Hi all,

I have a very large NetApp file share (it's our primary file share, which also contains all of our users' home directories, etc.), and it is imperative that it gets backed up. The full backup is 13 TB, containing approximately 20 million files; it has been a thorn in my side for quite a while, and it's just getting worse as it grows. My current method for backing this up is NDMP, as that used to allow a backup within an acceptable window. But NDMP has proven very inconsistent lately, ranging from about 20 MB/s to 80 MB/s. The last incremental backup of this data was 2.7 TB and took 58 hours to complete! People make heavy changes to this data daily, and I've been bitten because the backup has been running while folks have changed things, so no restore is available.
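Quick arithmetic on the numbers quoted above puts the effective rate in perspective:

```python
# 2.7 TB moved in 58 hours works out to roughly 13-14 MB/s, below even
# the 20 MB/s low end of the NDMP range mentioned above.
def throughput_mb_per_s(terabytes: float, hours: float) -> float:
    # 1 TB = 1024 * 1024 MB (binary units)
    return terabytes * 1024 * 1024 / (hours * 3600.0)

rate = throughput_mb_per_s(2.7, 58)
print(f"{rate:.1f} MB/s")  # about 13.6 MB/s
```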

I am also able to use Accelerator by pointing the backup at a CIFS share and running an MS-Windows policy type against it. But Accelerator doesn't seem to do much good. The job details do indicate that it is enabled, but it cannot use the change journal for this (unsupported for non-local volumes). The benefit of this method is that I can break the job into multiple streams, create an individual policy for each data set, etc., but each one runs so slowly that it negates any benefit.

I'm not wanting to revert back to adding a Synthetic schedule into the mix, but I'm willing to try it.

Are any of you dealing with such issues, and can you tell me what works best for you? Not that it will necessarily be best for our environment (details of which I'll gladly provide), but I've tried multiple things, tuning parameters, etc., and may need a new direction here.

Thank you!

Todd

NetBackup 7.6 support for vSphere/vCenter 6.0

I do not need a solution (just sharing information)

Hi,

I just need a quick verification: does NetBackup 7.6.0.1 and/or 7.6.0.3 support vSphere/vCenter 6.0?

Thanks,

Tim

Need information on implementing VADP-based backups - NBU 7.6.0.1

I need a solution

Hello everyone, I have a few queries regarding the implementation of VMware-based backups. Our NBU version is 7.6.0.1 and we have a capacity-based license. I have a query regarding the VMware backup access host.

Do I need a dedicated physical Windows server to act as the backup access host, or can I make use of an existing Solaris media server as the VMware backup access host?

I would also like to know whether using a virtual machine as the backup access host instead of a physical server will have any performance impact. Please assist.

Unable to restore

I need a solution

Hi,

I have Backup Exec 2012, and when I attempt to restore, I get the following error message:

the supplied datetime represents an invalid time for example when the clock

I select the backup set and select the folders I want to restore, but then this error appears.

Can you please advise?

It is extremely urgent.

Regards,

SSR Management Solution - "Last zufällig verteilen" (randomize load)

I need a solution

Hello,

I have a question about the option "Last zufällig verteilen" (randomize load, in minutes).

I created a backup policy with the "Last zufällig verteilen" option set to 60 min. Every Saturday at midnight, a recovery point set (base) is supposed to be created. However, the backup policy applies to 5 PCs (Windows 7 32-bit).

The policy is transferred to the clients, but under "Next run" every single client shows midnight. Apparently the "Last zufällig verteilen" option is not taking effect.

Or is that simply what is displayed for every client, even though the clients actually start the backup job at different times?

How often is a changed backup policy in the Management Solution web platform synchronized to the clients?

Many thanks for the help!

Regards

BlackSun


