Channel: Symantec Connect - NetBackup PureDisk - Discussions

PureDisk causing NetBackup status 48 errors


Hello,

Optimized duplications from Production Storage server to DR storage server via SLP fail with status 84, but backups to both Storage Pools complete successfully. 

 

Job Details:

21/11/2011 14:22:51 - Critical bpdm(pid=5672) sts_copy_extent failed: error 2060029 authorization failure 
21/11/2011 14:22:51 - Critical bpdm(pid=5672) image copy failed: error 2060029: authorization failure 
21/11/2011 14:22:51 - Error bpdm(pid=5672) cannot copy image from disk, bytesCopied = 18446744073709551615 
21/11/2011 14:22:51 - Critical bpdm(pid=5672) sts_close_handle failed: 2060022 software error 
21/11/2011 14:23:00 - Info pdmedia01(pid=5672) StorageServer=PureDisk:pdrnbpd0a; Report=PDDO Stats for (pdpd0a): scanned: 2 KB, stream rate: 0.00 MB/sec, CR sent: 0 KB, dedup: 100.0%, cache hits: 0 (0.0%)

 

At the same time I saw that on one of the PureDisk nodes (SPC) the queue process failed, and after a rerun it would not start.

Searching the site, I was pointed to this page:

http://www.symantec.com/business/support/index?page=content&id=TECH175441

and in pdwfe.log I saw this error: Cannot create PDDO Task (id: 51000) job, Storagepool FULL

Following that article, I cleared /var/log (it was at 70% used) and restarted PureDisk.

The duplications then ran fine, but queue processing still does not work on the SPC node (it works fine on SPA and SPB).

After about 12 hours the problem happened again and the duplications failed again with a status 48 error; /var/log is fine, by the way (around 40% used on each node).

Can anyone help? Our PureDisk version is 6.6.3.
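For reference, this is roughly how I am watching the partitions on each node so I can catch the fill-up before queue processing dies again (a minimal sketch; the /Storage mount point is the default on our nodes, adjust if yours differs):

# How full are /var/log and the storage partition on this node?
df -h /var/log /Storage

# Which directories under /var/log are the biggest consumers?
du -sk /var/log/* | sort -n | tail -10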

 


How to create a user ID for PureDisk's Web UI via PuTTY?


Is it possible to create a user ID for PureDisk's Web UI from the command line via PuTTY? Usually you create user IDs through the web interface.

After upgrading to PureDisk 6.6.4, backups are stuck at 52 percent complete


Hi,

I upgraded my PureDisk server from 6.6.3 to 6.6.4 because some backups were failing, but now all my backups get stuck at 52 percent complete.

Job step: "Import PO-List on the MetaBase Engine"

The end of the job log:

 *** Supportability Summary ***
jobid                 = 20172
jobstepid             = 81567
agentid               = 27
hostname              = SMXP002
starttimejobstep      = 2012-Dec-26 11:09:32 CET
endtimejobstep        = 2012-Dec-26 11:10:05 CET
workflowstepname      = Data Backup
status                = SUCCESS

[2012-Dec-26 11:10:06 CET] *** Start: MBImport ***

???

MSDP & PureDisk


Hello everybody!!

 

Do you know exactly what the difference is between MSDP and PDDO? I have read that PureDisk won't be supported by Symantec in the future, and I have seen some articles about how to migrate from PDDO to MSDP... I ask because I was working in a PureDisk environment, but some TSEs call the servers MSDP servers... so I'm confused... do you have any idea?

Backup jobs always getting cache hits: 0 (0.0%)


Hi there,

I am experiencing this problem with all backup jobs, whether they use server-side deduplication or client-side deduplication.

The deduplication ratios look very good, but cache hits are always 0.

Here is an example:

01/17/2013 07:15:33 - Info bak002 (pid=19972) StorageServer=PureDisk:bak002; Report=PDDO Stats for (bak002): scanned: 153346694 KB, CR sent: 617422 KB, CR sent over FC: 0 KB, dedup: 99.6%, cache hits: 0 (0.0%)

It is surely a configuration problem, since it happens on all my clients, but so far I haven't found the cause.
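The only lead I have so far is the fingerprint cache configuration in pd.conf on the clients (for client-side dedup) and on the media server. Below is a sketch of the entries I have been reviewing; the parameter names are the ones I understand exist in the NetBackup Deduplication Guide, and the values shown are defaults/examples rather than recommendations, so please verify them against the guide for 7.5:

# /usr/openv/lib/ost-plugins/pd.conf (UNIX/Linux)
# install_path\Veritas\NetBackup\bin\ost-plugins\pd.conf (Windows)

#FP_CACHE_LOCAL = 1         # keep/use a local fingerprint cache on this host
#FP_CACHE_INCREMENTAL = 1   # use the fingerprint cache for incremental backups too
#FP_CACHE_MAX_MBSIZE = 20   # maximum size of the fingerprint cache in MB (example value)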

My environment:

  • 1 Master 7.5.0.4 Linux
  • 1 Media 7.5.0.4 Linux
  • Clients Windows/Solaris/linux 7.5.0.4

Any idea?

Best Regards

fakeFPCheck reports corruption while reclaiming deduplication space


I am reclaiming deduplication storage space manually on a 5200 appliance. I am following the process in TECH180659.

I see a lot of messages of this type:

January 21 08:58:24 INFO [1082194240]: fakeFPCheck: DO 53c8979cafabfcc54c82cf7900fa465a is corrupt
January 21 08:58:26 INFO [1082194240]: fakeFPCheck: DO 546d2e925f2a6325c52884556187e102 is corrupt

Running the garbage collector doesn't seem to clean them up, as they show up again on the next pass of CR queue processing (/usr/openv/pdde/pdcr/bin/crcontrol --processqueue).

Is there some way to clean up these corrupt fragments and reclaim the storage space?
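For context, this is what I run between passes to watch the queue and the data store (a sketch based on my reading of the appliance crcontrol options; please double-check the flags on your version):

# How much is still waiting in the CR transaction queue?
/usr/openv/pdde/pdcr/bin/crcontrol --queueinfo

# Data store usage, checked before and after a queue-processing pass
/usr/openv/pdde/pdcr/bin/crcontrol --dsstat

# Kick off another queue-processing pass (per TECH180659)
/usr/openv/pdde/pdcr/bin/crcontrol --processqueue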

Thanks,
Wayne
 

PureDisk issue - no scheduled backup jobs are running


Hello all,

 

I am hitting a recurrent issue with a PureDisk 6.6.3a architecture: no scheduled backup jobs are running.

Details from the different logs:

 

slapd.log:

Jan 29 11:17:35 nbupd-som slapd[6118]: connection_read(18): no connection!

Jan 29 11:25:30 nbupd-som slapd[6118]: connection_read(22): no connection!

Jan 29 11:30:16 nbupd-som slapd[6118]: connection_read(18): no connection!

pdweb-error.log:

 

[Tue Jan 29 11:15:31 2013] [error] [client ::1] client denied by server configuration: /opt/pdweb/htdocs/
[Tue Jan 29 11:15:31 2013] [error] [client ::1] client denied by server configuration: /opt/pdweb/error

 

Controller.log:

 

Tue Jan 29 2013 11:15:05.800240 INFO  (1074796864): Connection from 10.25.84.10 activated : pdagent 46
Tue Jan 29 2013 11:15:11.224889 INFO  (1077176640): Timeout reached for connection from 10.25.60.10
Tue Jan 29 2013 11:15:11.241392 INFO  (1077176640): Timeout reached for connection from 10.25.75.10
Tue Jan 29 2013 11:15:11.271076 INFO  (1077176640): Timeout reached for connection from 10.25.54.10
Tue Jan 29 2013 11:15:17.147818 INFO  (1075853632): Connection from 10.25.60.10 activated : pdagent 15
Tue Jan 29 2013 11:15:55.717928 INFO  (1076382016): Connection from 10.25.54.10 activated : pdagent 8
Tue Jan 29 2013 11:16:00.980828 INFO  (1074268480): Connection from 10.25.75.10 activated : pdagent 5
Tue Jan 29 2013 11:16:11.496577 INFO  (1077176640): Timeout reached for connection from 10.25.70.10
Tue Jan 29 2013 11:16:11.512855 INFO  (1077176640): Timeout reached for connection from 10.25.75.19
Tue Jan 29 2013 11:16:11.581968 INFO  (1077176640): Timeout reached for connection from 10.25.12.10

 

agent.log:

Tue Jan 29 2013 11:15:30.299310 ERROR (1081665856): The webservice returned an error :  Cannot retrieve next job step for Agent nbupd-som.groupe.sa.colas.com (25000000): Network error while retrieving the response.

Tue Jan 29 2013 11:15:30.299459 ERROR (1081665856): WebService call failed
Tue Jan 29 2013 11:15:50.370577 INFO  (1086687552): Execute scheduled task: next jobstep
Tue Jan 29 2013 11:17:30.733441 ERROR (1081665856): Network error during webservice call: Operation timed out after 120018 milliseconds with 0 bytes received

 

The only workaround we have found is to reboot the PureDisk server; after that, the jobs are launched correctly.
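Before rebooting we usually run a couple of quick checks; a minimal sketch is below (the process names are simply the ones that appear in the logs above, and the curl probe against localhost is only an assumption about how the web service is exposed on the node):

# Are the daemons that show up in the logs still alive?
ps -ef | egrep 'slapd|httpd|pdagent|pdwfe'

# Does the local web service answer at all? (prints only the HTTP status code)
curl -k -s -o /dev/null -w '%{http_code}\n' https://localhost/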

Is this a known issue? If so, is there a fix for it?

Thanks in advance to all for your help.

Regards,

Florent.

 

Obtaining protected data size with PureDisk


I have a PureDisk environment running 6.6.0.3, without a central SPA, and need to be able to obtain the front-end protected size of the data that I am backing up. If I had an SPA with central reporting enabled, I could run the Enterprise License report, but since I do not have central reporting enabled, I don't seem to have that report available.

 

How do I determine how much front-end data I am protecting?


SLP duplication jobs


Hello. I had a server where the duplication jobs (run by the SLP) were neither finishing nor erroring out. I have a ticket open for this, but I generally have better luck and faster responses on the forums.

After cancelling these hung jobs, I restarted services on the media and master servers, etc. I am currently pointing this client directly at the storage unit, and backup jobs are running fine there; no SLP is in use at the moment.

My question -

C:\PROGRA~1\Veritas\NetBackup\bin\admincmd>nbstlutil stlilist -image_incomplete

V7.5.0 I ahfs1_1361257201 ahslp 2
V7.5.0 C ahmedia-hcart3-robot-tld-2 1 1
V7.5.0 I ahfs1_1361343601 ahslp 2
V7.5.0 C ahmedia-hcart3-robot-tld-2 1 1
V7.5.0 I ahfs1_1361430000 ahslp 2
V7.5.0 C ahmedia-hcart3-robot-tld-2 1 1

These images are still awaiting duplication. I seem to recall reading that they will eventually catch up? If so, does it matter whether this client points at the SLP or at the storage unit? Does the client have to have the SLP active and specified in the policy for these images to catch up?
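For reference, these are the commands I have been using while deciding what to do with the pending images; the backup ID and SLP name are taken from the output above, and cancel is irreversible, so I have only treated it as a last resort:

REM List images with incomplete SLP copies (same command as above)
nbstlutil stlilist -image_incomplete

REM Temporarily suspend SLP processing for this lifecycle while troubleshooting
nbstlutil inactive -lifecycle ahslp

REM Resume it afterwards
nbstlutil active -lifecycle ahslp

REM Give up on a single image's pending copies (irreversible)
nbstlutil cancel -backupid ahfs1_1361257201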

 

 


PureDisk 6.5 error: DR backup of the CR storage/data fails


To the PureDisk experts and gurus out there: can anyone help me resolve this issue I encountered during a DR backup? It always stops at the CR (content router) part of the backup.

[2013-Feb-27 17:20:33 PHT] *** Start: DRBackupCRData ***
[2013-Feb-27 17:20:33 PHT] Full Backup
[2013-Feb-27 17:20:33 PHT] Reading storage directory
[2013-Feb-27 17:20:33 PHT]
*** Start: DRNBUDisasterRecovery.backupCRData ***
[2013-Feb-27 17:20:33 PHT]
[17:20:33] Type of backup: full
[2013-Feb-27 17:20:34 PHT]
[17:20:34] Backup of the Content Router data in progress...
[2013-Feb-27 17:20:34 PHT]
[17:20:34] Saving extended attributes
EXIT STATUS 1: the requested operation was partially successful
EXIT STATUS 1: the requested operation was partially successful
EXIT STATUS 1: the requested operation was partially successful
*** Error Message ***

severity: 6
server: 1494000000
source: DRBackupCRData_DRBackupCRData
description:
Backup Failed : Unable to backup the CR Data
*** End ***

*** Supportability Summary ***
jobid = 673089
jobstepid = 3376465
agentid = 1494000000
hostname = xxxPDOS2
starttimejobstep = February 27, 2013, 5:20 pm
endtimejobstep = February 28, 2013, 11:05 pm
workflowstepname = BackupData
status = ERROR

PureDisk 6.6.3a plus v6 bundle causes slow full PDDO backups


We recently upgraded a 4-CR environment from 6.6.1.2, and since then full PDDO backups have been slow. We upgraded a nearly identical environment last year to 6.6.3a + the v5 bundle and did not see this behavior. Has anyone else seen it?
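One thing I have started doing to narrow it down is comparing the PDDO Stats lines from before and after the upgrade (stream rate, dedup % and cache hits). A minimal sketch, assuming legacy bpdm/bptm logging is enabled in the default location on a UNIX media server:

# Pull the most recent PDDO Stats report lines from the media server logs
grep -h "PDDO Stats" /usr/openv/netbackup/logs/bpdm/log.* /usr/openv/netbackup/logs/bptm/log.* | tail -20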

PureDisk, tape and backup strategy


Hello,

 

NetBackup allows the following backup strategy:

  • Weekly full backup done on tape
  • Incremental backup done on a dedup pool

 

I have several questions about this:

Can NetBackup find the files modified between the full backup on tape and the first deduplicated incremental backup?

Is a deduplicated full backup therefore unnecessary?

 

Thanks for your answers.

Regards.

Steve

 

Media Server Deduplication Pool with NBU 7.1 possible?


Hello.

Is a Media Server Deduplication Pool possible with NBU 7.1?

I cannot find the wizard to create a Media Server Deduplication Pool in NetBackup 7.1.0.4.

 

thx

 

MSDP Volume down - Error Code 2074


Hello,

I have configured an MSDP pool on a RHEL media server, with a storage unit (STU) as target.
NetBackup version 7.5.
A test backup to this STU initially ran fine, and all backup policies with this STU as target were running successfully.
Suddenly, an error appeared with NetBackup status code 2074 - Disk volume is down.
All policies failed.
The status of the deduplication pool is as follows:

Configured storage pool size: 25 TB
Used space: 1.5 TB
Available space: 23.5 TB

Why is the dedup volume down?

What kind of data is written to /var/log/puredisk/pdplugin.log? Over 88 GB of data has been written to this file. Why?

Can anybody explain?
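For what it is worth, these are the commands I have been using to look at the pool and volume state and, once the underlying cause is fixed, to try to bring the volume back; the pool and volume names below are placeholders for my own:

# Show the deduplication disk pool(s) and their state
nbdevquery -listdp -stype PureDisk -U

# Show the disk volume(s) and whether they are UP or DOWN
nbdevquery -listdv -stype PureDisk -U

# Attempt to change the volume state after fixing the root cause
# (valid states include UP, DOWN and RESET; which to use depends on the situation)
nbdevconfig -changestate -stype PureDisk -dp <disk_pool_name> -dv <volume_name> -state RESET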

 

kind regards
Gunnar
 

deduplication


Hi all,

We are working on using deduplication technology in our environment, but first want to get the basics clear:

  1. Before NBU 7.x, how did we do deduplication? Did we use Symantec NetBackup PureDisk?
  2. Symantec NetBackup PureDisk is a product that provides a complete backup and deduplication environment. It can be used stand-alone (PureDisk writes data to its own storage pools) or as a back end for NetBackup. How is it used stand-alone versus as a back end?
  3. Now, in 7.1, we have MSDP. Does this also use PureDisk?
  4. What is MSDP? And what is PDDO?

Puredisk system state job never runs


Hi all, I am new to PureDisk. For a long time now, the system state and services backup has never run.

It starts with "Prepare Backup Client Side" and then says "Aborted by Watchdog":

Job 49846: Aborted by Watchdog window exceeded
 *** Supportability Summary ***
jobid                 = 49846
jobstepid             = 230403
agentid               = 8
hostname              = xxxxxxxxxxxx
starttimejobstep      = March 19, 2013, 4:04 am
endtimejobstep        = March 19, 2013, 4:04 am
workflowstepname      = PrepareBackup
status                = ABORTED_BY_WATCHDOG

 

I have checked the policy and it is set to:

Escalate warning after 6 hours

Escalate error and terminate after 5 days.

Any ideas why it is failing?


How to change MSDP Path?


 

Hi All,

Master server: NBU 7.5.0.5

I configured the MSDP path as "c:\dedup", but that disk does not have enough space.

So I want to change the path from "c:\dedup" to "d:\dedup". Please check the attachment.

 

TKS

NBU dimensioning puredisk DSU


Good morning all,

Today I need to size a new media server for a bunch of servers that are going to be added to the environment.

I'm talking about 10 servers, some with SAP databases, detailed below:
2.35T of raw space (maximum disk capacity summed for all the servers).

370 GB of SAP DB + archive logs (currently running).

Around 500 GB (forecast) of new DB (SharePoint installation for Documentum).

 

The SAP DB + archive log backup runs every day.

Disk backups: 1 full + 6 incrementals.

SharePoint backups will also run every day (I don't know whether they will use the MSSQL agent or another specific one that can do incrementals + fulls).

Retention is 2 weeks for all backups.

I'm thinking of an initial 6 TB /DSU (PureDisk with dedup) in order to be comfortable for some time, but I would like a real, precise measurement to calculate it rather than guessing far from reality.
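To make the request concrete, here is the rough arithmetic I have done so far; the 5% daily change rate and the 10:1 overall dedup ratio are assumed example values only, not measurements, and the SharePoint figure is my own forecast:

File system fulls      : 2 x 2.35 TB          =  4.7 TB front end  (weekly full, 2-week retention)
File system increments : 12 x 2.35 TB x 5%    =  1.4 TB front end  (assumed 5% daily change)
Daily DB backups       : 14 x (0.37 + 0.5) TB = 12.2 TB front end  (SAP + forecast SharePoint)
Total front end        :                      = 18.3 TB over the retention window
Stored after dedup     : 18.3 TB / 10         =  1.8 TB            (assumed 10:1 overall ratio)

Under those assumptions 6 TB leaves plenty of headroom, but the DB dedup ratio is the figure I trust least, which is why I would like a more precise way to measure it.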

 

Thank you.

 

Kind Regards,

 

NTFS cluster blocking factor for MSDP database and MSDP data volumes?


Hi Forum - I've read elsewhere that it is recommended to format the NTFS volumes that will house the MSDP database and MSDP data with a cluster size of 64 KB (i.e. 128 traditional 512-byte sectors).

However, if I look at a slightly populated MSDP server, I see that most files are smaller than 64 KB.

Does anyone have an opinion on this?

 

 

Top folders with files...

  F:\MSDP_Data\history\dataobjects            12

  F:\MSDP_Data\history\errors                 12

  F:\MSDP_Data\history\retention              12

  F:\MSDP_Data\history\segments               12

  F:\MSDP_Data\history\tasks                  12

  F:\MSDP_Data\history\tlogs                  12

  F:\MSDP_Data\log\pddb                       43

  F:\MSDP_Data\queue                          83

  F:\MSDP_Data\data                       26,205

  F:\MSDP_Data\processed                 103,610

Total = 130119

Table of file sizes...

  =           0         75

  >           0     29,708

  >           4     28,285

  >           8     22,436

  >          16     12,058

  >          32      4,585

  >          64     18,802

  >       1,024        879

  >      10,240        836

  >     102,400        220

  >     204,800     12,118

  >     409,600        108

  >   1,024,000          9

Total = 130119
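For reference, the 64 KB cluster size I mentioned would be set when the volume is formatted, e.g. with the standard Windows format command (the drive letter and label here are just examples, and formatting of course destroys any existing data on the volume):

format F: /FS:NTFS /A:64K /V:MSDP_Data /Q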
 
 
 

 
