Channel: SCN: Message List - SAP Adaptive Server Enterprise (SAP ASE) for Custom Applications

Re: C# BulkCopy only sometimes supports unsigned bigint fields


> - is the problematic column created as 'null' or 'not null'? what happens if you change the nullability?


The column is created as nullable. I'll test it as not null (I suspect it will work).


> - are you using a C# program that utilizes the bulk copy libraries ?

C# ADO AseBulkCopy()


> - what is a sample data row that fails during bcp in?

This row fails

100000,1,1,1,,0


> - what happens if you use the INSERT command to insert a row, bcp the row out, and then bcp the row in?  success or failure?


Yes - command line BCP works fine.
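For reference, a minimal repro sketch of the comparison (table and column names are hypothetical; the shape matches the six-field sample row above, with the unsigned bigint as the fifth field):

===========================
-- same shape, differing only in the nullability of the unsigned bigint
create table t_nullable (c1 int, c2 int, c3 int, c4 int, c5 unsigned bigint null, c6 int)
go
create table t_notnull (c1 int, c2 int, c3 int, c4 int, c5 unsigned bigint default 0 not null, c6 int)
go
===========================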



Re: C# BulkCopy column mappings don't work with IDataReader


Will try and open a report - this will need to go through another team to raise it.

 

We're using SDK SP122.

 

If you look at the code in

AseBulkCopyBusinessBulk(IDataReader reader, AseBulkCopy bulkCopy, int batchSize)

 

and

 

AseBulkCopyBusinessBulk(DataTable table, DataRowState rowState, AseBulkCopy bulkCopy, int batchSize)

 

you'll see they're very different in dealing with column mappings.

Re: C# BulkCopy column mappings don't work with IDataReader


Hi Mike,

 

Okay, opening an incident would be great, as my team will handle it. If we could get the sample code, including ASE version, table DDL, that sort of thing, plus error messages.

 

 

We made some changes after SP122, but I'm not sure they will affect this one. Also, if the two signatures behave differently, a sample demonstrating the difference, based on your developers' experience, would be extremely helpful.

 

 

If they have the opportunity to test the current 15.7 SP130 release, that would be helpful as well.

 

 

Cheers,

 

-Paul

Re: C# BulkCopy column mappings don't work with IDataReader


Thanks, I have sent it on - I'll get you the case number when I get it.

 

> If they have the opportunity to test current 15.7 SP130 release that would be helpful as well.


We've downloaded SP130 and tested it with our test cases, and we get lots of failures. It would take too much time to go through the errors one by one, so we're giving up on SP130.

How long has it been out?

Should I expect 15.7 drivers to work with a 15.5 database?

Re: C# BulkCopy column mappings don't work with IDataReader


Hi Mike,

 

15.7 SP130 came out in August or thereabouts. It should work okay with ASE 15.5 - please open a new discussion and perhaps post one or two of the errors. There is no major difference in the AseBulkCopy class between the two other than some fixes.

 

However, without knowing the coding details, there is not much I can do here.


Cheers,

-Paul

Re: Can't kill process created by Sybase Java


Hi,

 

I am not sure what OS your ASE resides on, where you are executing your Java application from, or what version of jconn4.jar you are using.

 

Try the following and see if it cleans anything up: change your packet size to 1024 in the Java connection string.
Is the Java connection calling a remote server with a stored procedure?
Could it be the remote server connection to the ASE keeping the process alive?

Do you see anything in the ASE log related to PCI memory? Does it need to be increased? (See the sketch below.)
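For the PCI memory check, a quick sketch (assuming ASE 15.0.2 or later, where Java in the database runs on the PCI bridge; verify these parameter names on your version):

===========================
-- check whether the Java/PCI bridge is enabled and its memory allowance
sp_configure 'enable pci'
go
sp_configure 'pci memory size'
go
===========================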

 

Thanks,
Dawn Kim

What causes the insert delay in tempdb and how can it be improved?


I have ASE 12.5. The tempdb is on a ramdisk.

I have a stored procedure with a query like:

select * from mytab....

...

When I run this SP, the result is only 11 rows, and it takes about 2 seconds.

Then I change the SP to keep the result in a named temp table, like:

create table #tmptab...

insert into #tmptab
select * from mytab....

Then I run it again, and it takes about 6 seconds - a big difference. With only 11 rows, and everything done in memory, it should not be slower than that. How can I figure out what is happening and improve the performance in this case?

Re: What causes the insert delay in tempdb and how can it be improved?


Post the complete output from the following:

 

=========================== submit to 'isql'

sp_helpdb tempdb

go

sp_helpdevice

go

 

set showplan on

set statistics io,time on

go

 

-- execute each proc twice; once to compile,

-- second time to re-use in-memory query plan

 

exec sp_1 -- does the basic select * from mytab

go

exec sp_1 -- does the basic select * from mytab

go

exec sp_2 -- does the insert/select against #tmptab

go

exec sp_2 -- does the insert/select against #tmptab

go

===========================

 

NOTE: Would recommend posting all output as a *txt attachment to maintain format and make it easier to read.


Re: What causes the insert delay in tempdb and how can it be improved?


Thanks, Mark. Here are the results, following your guide:

 

sp_helpdb tempdb
go

name    db_size    owner  dbid  created       status
tempdb  2003.0 MB  sa     2     Oct 14, 2014  select into/bulkcopy/pllsort, trunc log on chkpt, ddl in tran, abort tran on log full

name    attribute_class       attribute      int_value  char_value         comments
tempdb  buffer manager        cache binding  1          tempdb data cache  NULL

device_fragments  size       usage      created              free kbytes
master            3.0 MB     data only  Apr  9 2011 11:25AM  1866
tempdb_data       1500.0 MB  data only  Apr  9 2011  1:27PM  1529940
tempdb_logs       500.0 MB   log only   Apr  9 2011  1:27PM  not applicable

log only free kbytes = 509930


sp_helpdevice
go

device_name  physical_name                  description                                    status  cntrltype  device_number  low       high
tempdb_data  /db/oe/tempdb/tempdb_data.dat  special, dsync off, physical disk, 1500.00 MB  2       0          3              50331648  51099647
tempdb_logs  /db/oe/tempdb/tempdb_logs.dat  special, dsync off, physical disk, 500.00 MB   2       0          4              67108864  67364863

 

 

 

 

---Without insert
exec sp_1 -- does the basic select * from mytab

Total actual I/O cost for this command: 0.
Total writes for this command: 0
Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 0 ms.

10/15/2014 1:00:49.613 PM

Total actual I/O cost for this command: 0.
Total writes for this command: 0
Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 0 ms.
Total actual I/O cost for this command: 0.
Total writes for this command: 0
Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 3 ms.

status
11150 C.
7795 D.
13618 DIR
1445 EXE
48 GMD
55 IMD
66471 M.
374 NMD
2383 REG

Table: tab1 scan count 4, logical reads: (regular=124929 apf=0 total=124929), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: tab2 scan count 204728, logical reads: (regular=235956 apf=0 total=235956), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: tab2 scan count 1, logical reads: (regular=5 apf=0 total=5), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: Worktable1  scan count 5, logical reads: (regular=151 apf=0 total=151), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Total actual I/O cost for this command: 722082.
Total writes for this command: 0
Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 1796 ms.

10/15/2014 1:00:51.413 PM

Total actual I/O cost for this command: 0.
Total writes for this command: 0
Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 0 ms.

 

 

 

 

---With insert
exec sp_2 -- does the insert/select against #tmptab

Total actual I/O cost for this command: 0.
Total writes for this command: 0
Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 0 ms.

10/15/2014 1:01:35.833 PM

Total actual I/O cost for this command: 0.
Total writes for this command: 0
Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 0 ms.
Total actual I/O cost for this command: 0.
Total writes for this command: 0
Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 0 ms.
Table: #table1______02000450014068719 scan count 0, logical reads: (regular=18 apf=0 total=18), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: tab1 scan count 1, logical reads: (regular=121953 apf=0 total=121953), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: tab2 scan count 204728, logical reads: (regular=229304 apf=0 total=229304), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: tab2 scan count 1, logical reads: (regular=5 apf=0 total=5), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: Worktable1  scan count 1, logical reads: (regular=29 apf=0 total=29), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Total actual I/O cost for this command: 702618.
Total writes for this command: 0
Execution Time 31.
SQL Server cpu time: 3100 ms.  SQL Server elapsed time: 3226 ms.

10/15/2014 1:01:39.060 PM

Total actual I/O cost for this command: 0.
Total writes for this command: 0
Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 0 ms.

 

The difference is:

Table: #table1______02000450014068719 scan count 0, logical reads: (regular=18 apf=0 total=18), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Does this cause the performance difference?

How to use a temp table inside a called procedure?


Suppose I have a stored procedure sp1 which calls another procedure sp2, like:

 

create proc sp1

as

begin

 

    CREATE TABLE #table1 (......)

    Exec sp2

 

end

 

Then I want to use #table1 in sp2, like:

 

create proc sp2

as

begin

 

....

   delete from #table1

   INSERT INTO #table1

......

end

 

How do I pass #table1 to sp2? Is there a way to do this?

Re: How to use a temp table inside a called procedure?


The trick to setting this up is to create the #table1 table before creating sp2, to avoid any "object not found" resolution error while the query tree is being created.  Then drop the table.

When sp2 is called by sp1, it will automatically re-resolve its #table1 entry to the #table1 created in sp1.

 

  CREATE TABLE #table1 (......)

go

 


create proc sp2

as

begin

 

....

   delete from #table1

   INSERT INTO #table1

......

end

go

drop table #table1
go

 

create proc sp1

as

begin

    CREATE TABLE #table1 (......)

    Exec sp2

end
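A concrete end-to-end version of the same trick, using a hypothetical one-column table (all names and columns here are placeholders, not from the thread):

===========================
-- create #table1 first so sp2's reference resolves at creation time
create table #table1 (id int)
go
create proc sp2
as
begin
    delete from #table1
    insert into #table1 values (1)
end
go
-- the scaffold table is no longer needed
drop table #table1
go
create proc sp1
as
begin
    create table #table1 (id int)
    exec sp2
    select * from #table1   -- returns the row inserted by sp2
end
go
exec sp1
go
===========================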

Re: How to use a temp table inside a called procedure?


Thanks, got it, Bret. One more question: if sp2 calls another proc sp3, is #table1 available to sp3 if I do something like the following?

 

  CREATE TABLE #table1 (......)

go

 

create proc sp3

as

begin

....

   delete from #table1

   INSERT INTO #table1

......

 

end


create proc sp2

as

begin

 

....

   delete from #table1

   INSERT INTO #table1

   exec sp3

......

end

go

drop table #table1
go

 

create proc sp1

as

begin

    CREATE TABLE #table1 (......)

    Exec sp2

end
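For reference, the same scoping should extend down the call chain: a #table1 created in sp1 stays visible to sp2 and to any procedure sp2 calls (such as sp3) while sp1 is still executing, so the create-then-drop trick shown above can be repeated when creating sp3.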


How to create a composite index?


Most of the time, I have no problem creating a composite index.

I tried to create a composite index on a table and got the following error message:

Number (1903) Severity (16) State (1) Server (Myserver) 600 is the maximum allowable size of an index.  Composite index specified is 602 bytes.

It is because one column has a length of 600 (univarchar(600)).

I have a query with a condition like:

where col1 = 1 and col2 like 'abc%'

col2 is univarchar(600). I tried to create index (col1, col2). Without a proper index, the query has bad performance.

How can I resolve this problem?

Re: How to create a composite index?


Hm.  That doesn't seem right - a univarchar(600) would count as 1200 bytes.  Perhaps it is a univarchar(300)?

 

There isn't a lot of wiggle room here.

 

You can create such an index if the column can be redefined to be a univarchar(299) or a varchar(598).

 

You could migrate to a server with a 4K or larger page size; the maximum width of an index depends on the page size:

 

set switch on 3604

go

dbcc serverlimits

go

[...]

            Item dependent on page size                             : 2048    4096    8192    16384
-----------------------------------------------------------------------------------------------------------

Server-wide, Database-specific limits and sizes

[...]

   Max user-visible index row-size                                   : 600     1250    2600    5300

[...]
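Another possible workaround, sketched below under the assumption of ASE 15.0.2 or later (which supports function-based indexes): index col1 together with a short prefix expression of col2, and repeat that expression in the query so the optimizer can match it. All names are placeholders; verify with showplan that the index is actually chosen.

===========================
-- hypothetical sketch: index col1 plus a 255-character prefix of col2
create index mytab_col1_prefix_idx on mytab (col1, substring(col2, 1, 255))
go

-- repeat the indexed prefix expression in the where clause; keep the
-- original predicate so matches longer than the prefix stay correct
select *
from mytab
where col1 = 1
  and substring(col2, 1, 255) like 'abc%'
  and col2 like 'abc%'
go
===========================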

 

 


Re: best size of procedure cache size?


Hi Kent,

 

Your sp__monitorconfig 'procedure cache size' output indicates the following:

  • current usage of 26+%
  • HWM of 787,437 x 2K pages, which is 52+%
  • re-used count of 746,136, which is 50%

 

The ASE expert whom you consulted is right to say that the re-used count should be as close to zero as possible.

 

Do you have Statement Cache enabled (check 'statement cache size')?

 

If you have, it may be too small.  ASE will not exceed the configured 'statement cache size' when caching new incoming statements, even if there is free space available in the procedure cache (with an HWM of 52+%, there was 47+% spare at the time). Instead, ASE removes the oldest cached statements from the statement cache (along with their Light Weight Procedures (LWPs) and query plans) to free up space. This is probably the reason for the re-used count of 50%.
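A quick way to inspect and adjust this (a sketch; the new value at the end is purely illustrative, and 'statement cache size' is configured in 2K pages):

===========================
-- current setting and run-time usage of the statement cache
sp_configure 'statement cache size'
go
sp_monitorconfig 'statement cache size'
go
-- example only: raise the limit - tune the number to your workload
sp_configure 'statement cache size', 40000
go
===========================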

 

Best regards,

Raymond

Re: How to create a composite index?


Thank you, Bret. My mistake: the column is varchar(600). The page size of the ASE is 2K.

 

Here is the output from my ase:

Item dependent on page size                  : 2048    4096    8192    16384
Max index row-size, including row-overheads  : 650     1300    2700    5400
Max user-visible index row-size              : 600     1250    2600    5300

db error caused by dbcc memusage


I issued dbcc memusage and the system froze, then got the following errors:

 

00:00000:00000:2014/10/16 09:50:23.91 kernel  secleanup: time to live expired on engine 1

00:00000:00009:2014/10/16 09:50:23.93 kernel  Terminating the listener with protocol tcp, host myserver, port 4101 because the listener execution context is located on engine 1, which is not responding.

00:00000:00009:2014/10/16 09:50:23.93 kernel  ************************************

00:00000:00009:2014/10/16 09:50:23.93 kernel  curdb = 1 tempdb = 2 pstat = 0x200

00:00000:00009:2014/10/16 09:50:23.93 kernel  lasterror = 0 preverror = 0 transtate = 1

00:00000:00009:2014/10/16 09:50:23.93 kernel  curcmd = 0 program =

00:00000:00009:2014/10/16 09:50:23.93 kernel  pc: 0x0000000000cd0847 upsleepgeneric+0x437()

00:00000:00009:2014/10/16 09:50:23.93 kernel  pc: 0x0000000000cadcae listener_checkagain+0x1be()

00:00000:00009:2014/10/16 09:50:23.93 kernel  pc: 0x0000000000cad938 listener+0x338()

00:00000:00009:2014/10/16 09:50:23.93 kernel  pc: 0x0000000000cce388 kpstartproc+0x48()

00:00000:00009:2014/10/16 09:50:23.93 kernel  end of stack trace, spid 10, kpid 1638425, suid 0

00:00000:00009:2014/10/16 09:50:23.93 kernel  Started a new listener task with protocol tcp, host myserver, port 4101.

 

Then the system repaired itself within 1-2 minutes.

dbcc memusage runs fine on another staging server.

What is the possible reason for this? Hardware?

Which one has better performance: like, charindex, patindex


In a query's where clause, to check whether string a contains string b, I have a few options with ASE 12.5.4:

 

a like '%b%'

charindex(b,a)>0

patindex('%b%', a)>0

 

 

Which one has better performance?
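One way to settle it for your own data is to time all three forms with statistics enabled (a sketch; mytab and col are placeholders, and 'b' stands for the search string):

===========================
set statistics io, time on
go
select count(*) from mytab where col like '%b%'
go
select count(*) from mytab where charindex('b', col) > 0
go
select count(*) from mytab where patindex('%b%', col) > 0
go
===========================

Note that all three predicates have a leading wildcard or wrap the column in a function, so none of them can use an index; every row is scanned, and the difference is per-row evaluation cost, which is why measuring on your own data matters.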

question on dbcc message


I am running dbcc checkdb and got the following messages for two tables:

 

Checking mytab1: Logical pagesize is 2048 bytes  ---long time

Checking mytab1: Logical pagesize is 2048 bytes

The total number of data pages in this table is 390.

The total number of pages which could be garbage collected to free up some space is 336.

The total number of deleted rows in the table is 3.

The total number of pages with more than 50 percent insert free space is 6.

Table has 2580 data rows.

 

Checking mytab2: Logical pagesize is 2048 bytes

The total number of data pages in this table is 82265.

The total number of pages which could be garbage collected to free up some space is 5431.

The total number of deleted rows in the table is 763.

 

They are small tables, but dbcc took a long time on them. Does this mean something is wrong with these two tables?

How can I figure out and resolve the problem, if any?
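If the aim is simply to clean up the space these messages report, one thing to try (a sketch; reorg reclaim_space applies to data-only-locked tables, which is typically where these garbage-collection counts come from):

===========================
-- reclaim space left behind by deletes and updates
reorg reclaim_space mytab1
go
reorg reclaim_space mytab2
go
===========================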
