Channel: SCN: Message List - SAP Adaptive Server Enterprise (SAP ASE) for Custom Applications

Re: how to change seed for identity column?


Mark A Parsons wrote:

 

 

 

Uniqueness is only *guaranteed* if there's a unique, single-column index on the identity column.

 

 

 

Ouch.  I'll assume this was a typo from you as well, since *technically* triggers could be used to enforce uniqueness too - not just "only" unique indexes.  And yes, *technically* those triggers could be disabled, and *technically* indexes could be dropped, blah, blah, blah.  Not really that interesting though, imo.


Re: how to change seed for identity column?


Not a typo.

 

There are a handful of ways to bypass triggers, leaving ... correct me if I'm wrong ... a unique index as the only way to ensure uniqueness.

 

Sure, if you want to start changing the DDL (eg, drop the unique index) then all bets are off.
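
For anyone following along, a minimal sketch of the combination being discussed (table, column, and index names here are made up):

-- the unique index is what actually guarantees uniqueness of the
-- identity values; the identity property alone only generates them
create table mytab
(
    id  numeric(10,0) identity,
    val varchar(30) not null
)
go
create unique index mytab_id_uk on mytab(id)
go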

JDBC Compatibility with ASE 15.7


Hi, Colleagues.

 

I have a question about JDBC compatibility with ASE 15.7.

 

My customer wants to know about it.

 

The customer is using jconn3.jar with ASE 15.0.3 ESD#4.2 on Sun SPARC,
but is going to upgrade to ASE 15.7 SP122 (EBF 22774).

 

So the customer is wondering about compatibility between jconn3.jar and ASE 15.7 SP122.

 

I have checked the following link:

Sybase White Paper: Basic JDBC Connectivity for Sybase Databases, Supplement for Sybase Partner Certification Reports, u…


But I couldn't find the exact information.

 

If someone knows about this, please let me know.

 

In addition, are there any restrictions on using 15.0.3 features on ASE 15.7 SP122?

 

 

Thanks in advance.

 

jinkyun.

Re: Is there a getutcbigdatetime()


Yes - we've done that, but found it slightly unreliable, as the difference between getutcdate() and getdate() is often 59 minutes and not 60 minutes.

 

Run this...

select datediff(ms, getutcdate(), getdate()), datediff(minute, getutcdate(), getdate())

 

and you often get

3599996 ms          59 minutes

 

In fact if you run

select convert(char, getdate(), 109), convert(char, getutcdate(), 109),
       convert(char, getdate(), 109), convert(char, getutcdate(), 109)

 

You get

Oct  1 2014  9:21:34:836AM     Oct  1 2014  8:21:34:840AM     Oct  1 2014  9:21:34:836AM     Oct  1 2014  8:21:34:840AM

 

 

We found this to be a solution

dateadd(hour, case when datediff(minute, getutcdate(), current_time()) >= 59 then -1 else 0 end, current_bigdatetime())

 

But should this really be necessary?

There really should be a getutcdate() variant that returns a bigdatetime.
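
As a stopgap, the hard-coded -1 hour above can be generalized. This is only a sketch built from the same functions: it rounds the local-vs-UTC minute difference to the nearest half hour, which absorbs the 59/60-minute jitter and also covers half-hour time zones. Test it around DST transitions.

-- subtract the (rounded) local-to-UTC offset from the local bigdatetime
select dateadd(minute,
       convert(int, -30 * round(datediff(minute, getutcdate(), getdate()) / 30.0, 0)),
       current_bigdatetime())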

Re: JDBC Compatibility with ASE 15.7


Hi,

 

It should work, but if there is a problem we will ask you to upgrade jConnect. Here is a link covering interoperability between ASE and client/driver versions:

SyBooks Online

 

 

Thanks,
Dawn Kim

Re: JDBC Compatibility with ASE 15.7


Thanks for your reply.

Unfortunately, the customer wants to stay on the jconn3.jar version.

I think it should work properly, but I'm not 100% sure.

 

Anyway, it needs to be tested before we apply it.

 

Regards.
jinkyun.

Re: JDBC Compatibility with ASE 15.7


Hi,

 

The jconn3.jar is in the jConnect 6 folder, which is listed in the above link for the Sybase online books.

jconn4.jar is in the jConnect 7 folder.

So ultimately the customer will have to test those older applications against the newer ASE. If they run into problems, please report them. It might be that they have to set a connection property to make sure everything works correctly. Here is a list of the jconn3 (jConnect 6) connection properties for further reference: SyBooks Online for jConnect connection properties

Re: How big to make Large IO cache pools ?


Stefan Krause-Lichtenberg wrote:

> and check if you have 'hot tables' - for these tables a new named cache makes sense.

 

There are two different "schools of thought" about this. Let me explain the other one, which, as you may have guessed, is also my own opinion. Hot tables manage to keep their pages in the pools, even when they share the default data cache, simply as a natural consequence of their high activity. So named caches are not a requirement for guaranteeing that their pages stay in memory. Named caches do serve a secondary goal well: reducing spinlock contention by distributing access across more caches, but that goal may also be achieved by cache partitioning.

 

I wouldn't say named caches are a bad idea for hot tables. It's only a matter of accepting the extra effort of keeping many named caches well tuned.
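
For readers who do want to go the named-cache route, a minimal sketch of the mechanics (the cache name, size, partition count, and object names are all hypothetical; check sp_cacheconfig output before and after):

-- create a 2GB named cache, carve it into 4 partitions to spread
-- spinlock contention, then bind one hot table to it
exec sp_cacheconfig 'hot_cache', '2G'
go
exec sp_cacheconfig 'hot_cache', 'cache_partition=4'
go
exec sp_bindcache 'hot_cache', 'mydb', 'hot_table'
go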

 

Regards,

Mariano Corral Herranz


Re: JDBC Compatibility with ASE 15.7


Just wanted to add that the jconn3 version is end of life, and has been for some time:

 

End of Life Notice for ADO.NET 1.x and jConnect 6.x End of Life Notice: - Sybase Inc

jConnect 6.x: November 30, 2011 (jconn3.jar)

 

There are various issues in jConnect 6 that can cause errors or wrong data in ASE (regardless of the ASE version). So, if they don't currently affect 15.0.3, you will probably be OK against 15.7. But as Dawn states, if you hit any issues you will be asked to upgrade to a current and supported version.

Re: How big to make Large IO cache pools ?


I would agree with much of what you're saying.

 

Managing named caches is a real pain, especially considering the elevated roles required to set up caches. (Sybase roles don't really work well in our company, where DBAs are a separate off-shored team.)

 

So we work around that by binding certain databases to certain caches: we know some databases hold the "main" data, so we don't want it flushed out.
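
For reference, the database binding itself is a one-liner (cache and database names hypothetical; as I recall it has to be issued from master):

use master
go
-- bind the whole database so its pages compete only with each other
exec sp_bindcache 'main_cache', 'main_db'
go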

 

It's just an attempt to get better performance, as we know that 25% of the main databases are in cache and 10% of the other databases are in cache.

 

No idea how effective it is, though, but it seems OK.

Re: How big to make Large IO cache pools ?


Without going into a lot of details about your processing requirements, there are several areas you could look at in terms of improving performance ...

 

For monitoring purposes you can look at monDataCache, monCachePool, monCachedObject, monOpenObjectActivity.  I'd also suggest running sp_sysmon at continuous 15-minute intervals and then reviewing the cache and disk io subsections. [Sorry, I'm not aware of an easy way to analyze objects down to the pool level.]
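
For example, a continuous 15-minute sample is a one-liner (standard sp_sysmon syntax; the 15-minute interval is just the suggestion above):

-- run back-to-back samples across the business day, then compare the
-- 'Data Cache Management' and 'Disk I/O Management' sections
exec sp_sysmon '00:15:00'
go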

 

Keep in mind that by the time you slice-n-dice a pool based on # of partitions, APF %, wash size, overhead ... you may be left with a cachelet that's too small for the optimizer to consider using. (See Jeff Tallman's presentation for a good breakdown of cache configuration issues.)   If the optimizer is choosing to ignore your large IO pools then you could be a) obtaining poor query plans and/or b) flooding your disk subsystem with a lot of smallish IOs (as opposed to fewer largish IOs).

 

If your 30K reports need to process 100GB of data/indexes and you only have 50GB of cache then you're obviously going to need to read some data from disk. ("Duh, Mark!" ?)  The question then becomes one of how to limit the number of times a chunk of data/index has to be read from disk, eg, if 5K of your reports process the same 10GB of data over and over then it would make sense to run these 5K reports together so as to re-use as many in-cache pages as possible ... as opposed to spreading those 5K reports out over several hours and risk later reports having to re-read pages from disk.  Obviously (?) this type of report scheduling is going to require detailed knowledge of the various reports and the chunks of data required by each report.

 

I often come across clients who are running with default configurations for 'disk i/o structures' (2048) and 'max async ios per engine/server' (2048 - depends on ASE/OS version).  On today's hardware these settings are typically too small and thus introduce an artificial limitation on the number of IOs the dataserver can have outstanding at any one time.  I typically recommend bumping these numbers up to 10K-20K (or more), with the objective being to push any IO bottlenecks out to the OS.  (Yeah, this will take some trial and error to see what works well in your environment.)
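
A sketch of what bumping these looks like (16384 is just a number in the 10K-20K ballpark mentioned above; depending on your ASE version some of these parameters may be static, i.e. require a restart):

exec sp_configure 'disk i/o structures', 16384
go
exec sp_configure 'max async i/os per engine', 16384
go
exec sp_configure 'max async i/os per server', 16384
go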

 

If day-to-day operations are of an OLTP nature, while your after-hours/weekend reports are of a DSS nature, you may need to consider dynamically reconfiguring your caches before/after running your reports in order to make best use of your memory.
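
A hedged sketch of such a before/after switch (the sizes and pool choices are hypothetical; the memory is shuffled between pools within the same cache):

-- before the nightly reports: grow the 128K pool, taking the memory
-- from the 16K pool; run the reverse command after the reports finish
exec sp_poolconfig 'default data cache', '16384M', '128K', '16K'
go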

 

------------------------

 

For a 1TB dataserver with 30K reports I'm assuming you have a decent number of engines (eg, 16, 20, 32), and if you've partitioned your (default) data cache to support this largish number of engines then there's a good chance that 8GB/128K pool may be too small ... especially if your reports are processing large volumes of data.

 

Again, without a lot of details (about the 30K reports, about engine counts, about cache configurations), I'm probably going to be leaning towards adding more memory to the 128K pool ... possibly tweaking the APF percentage ... and then using sp_sysmon to evaluate the effectiveness of this configuration.  And yes, unfortunately, this is going to require some trial and error ... because your configurations will depend heavily on the details of your processing requirements.
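
For completeness, the APF tweak is also an sp_poolconfig call (the 30 below is purely illustrative; the default limit is 10 percent):

exec sp_poolconfig 'default data cache', '128K', 'local async prefetch limit=30'
go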

 

Also keep in mind that changes in your cache configurations can lead to good/bad changes in your query plans, so it may also become necessary to tune some queries if you see degraded performance after making configuration changes to your cache(s).

Re: How big to make Large IO cache pools ?


Thanks for the information.

 

I'm trying to get the sysmon information, but unfortunately sysmon can only be run by the off-shored teams - it makes performance monitoring very tricky, but I will keep trying. (Sybase roles aren't great for companies with outsourced DBAs.)

 

Thanks for the information on 'disk i/o structures' and the other parameters. I've found a few we can increase and will add that to the list.

 

We only have 24 engines at the moment - but are buying a new machine with 48.

Would you expect to tweak the APF higher or lower?

 

Thanks again.

 

BTW, the caches have 32 partitions.

'disk i/o structures' is 12,000,

but 'max async i/os per engine' is only 2,000.

Re: how to change seed for identity column?


Switch on the identity_insert option for the mytab table.

set identity_insert mytab on


Delete the row which has the value 5000101 and re-insert it with the proper value, 101.
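
Putting it together, a sketch of the whole sequence (this assumes the identity column is named id and there's a single data column named val - both hypothetical; re-insert whatever the real row's data was):

set identity_insert mytab on
go
delete from mytab where id = 5000101
insert into mytab (id, val) values (101, 'restored value')
go
set identity_insert mytab off
go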

 

I hope this helps you.


Re: How big to make Large IO cache pools ?


There's really no relation between number of engines and APF.

 

The effectiveness of APF (and whether to increase/decrease) is going to be determined by your queries, fragmentation of your data, cache configurations and disk subsystem abilities.

 

I'd be looking at sp_sysmon to assist with determining the effectiveness of APF.

 

---------------------

 

Doubling the number of engines?

 

Hmmmm ... are there plans to double the workload, or is this an attempt at improving the performance of the current workload?  If the latter ... I hope someone's already done some aggressive P&T work (query tuning, process redesign, server/cache configurations, etc) and determined that increasing cpu counts will be more effective than adding memory and/or SSDs ...

ASE certification on Windows


Hi, Colleagues.

 

I have a question about ASE certification on the Windows platform.

My customer is using ASE 15.7 ESD#4.2.

The customer wants to know whether that version has been certified on Windows Server 2012 R2.

 

I tried searching for information but couldn't find any official document.

 

Does anyone know about this?
If you know, or have a related document, please let me know.

 

Thanks in advance.

 

jinkyun.


Re: How big to make Large IO cache pools ?


> There's really no relation between number of engines and APF.

I know - the two statements in my earlier post weren't meant to be related.

 

> The effectiveness of APF (and whether to increase/decrease) is going to be determined by your queries,

> fragmentation of your data, cache configurations and disk subsystem abilities.

 

This is a bit of an issue since these things change over time.

Fragmentation will vary.

We also use a SAN, and the performance of the SAN will vary according to the other 200 apps in the company that share it.

 

> Hmmmm ... are there plans to double the workload, or is this an attempt at improving the performance of

>  the current workload?  If the latter ... I hope someone's already done some aggressive P&T work

> (query tuning, process redesign, server/cache configurations, etc) and determined that increasing

>  cpu counts will be more effective than adding memory and/or SSDs ...

 

The workload has doubled in the last 6 months and is likely to double in the next 6 months - we've been aggressively tuning for the last 2 years. (I've worked on tuning Sybase systems previously, along with Sybase people.) I'm checking as much information as I can, but obviously, as I can't run sp_sysmon myself, it's very difficult. (I wish Sybase would allow sysmon to be run by non-sa users - or is this possible?)

 

We have SSDs on the SAN but obviously they're shared. It's interesting watching the performance of SQL as the SAN moves the data up the tiers (run the SQL: takes 4 mins; flush cache, re-run: 2.5 mins; flush cache, re-run: 1 min).

 

The current machine is 12 cores x 2 threads - which is certainly underpowered. However, the main reason for the move is to increase the cache to 500GB so we get fewer SAN reads.

Re: How big to make Large IO cache pools ?


Hi Mike

 

Picking up on your comment on running sp_sysmon without sa permissions, I wanted to point out a feature in 15.7 (I know you are not using this at the moment) that makes it easy to run system procedures without system roles.

 

From 15.7 ESD#2 there is a feature to address this, called 'execute as owner'. From that version, most system procedures are created with execute as owner, so you can grant execute on a (system) procedure to a non-privileged user:

http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc01672.1572/html/sec_admin/BABIFJGG.htm

 

(this assumes a valid login and a user added to sybsystemprocs)

 

grant execute on sp_sysmon to test1

go

sp_displaylogin test1

go

-- no roles displayed

 

Log in as test1, execute sp_sysmon and it works.

 

 

For earlier versions there is this solution, but of course it requires sa_role to set up:

http://www.sypron.nl/grant_sa.html

 

 

HTH

 

Bart

Re: How big to make Large IO cache pools ?


Thanks - we're due to upgrade to Sybase 15.7 in a few weeks' time, so we can try this out then.

Re: How big to make Large IO cache pools ?


Sorry for chiming in late on this discussion, but this is not a straightforward question.  First, on the topic of permissions and monitoring: all you need to be able to look at MDA data is mon_role.  If you have that, then the rest is just a matter of baselining over several days/weeks.

 

The best metric of how much of a pool is used is to look at monCachePool.  The key metrics there are PagesTouched and AllocatedKB.  Okay, so the first problem is they have two different units - pages vs. KB - but you simply need to multiply the PagesTouched by your server's page size (in KB) to get the answer in KB.  PagesTouched is one of the few counters in MDA that is a *current* value - meaning you don't have to do a delta between samples to get the actual value.

 

If (PagesTouched converted to KB) / AllocatedKB < 80%, I would start reducing the large IO pool.
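
A sketch of that check straight from MDA (this assumes mon_role and that the MDA proxy tables are installed; @@maxpagesize returns the server page size in bytes):

-- per-pool utilization: PagesTouched * (pagesize in KB) vs. AllocatedKB
select CacheName, IOBufferSize, AllocatedKB, PagesTouched,
       100.0 * PagesTouched * (@@maxpagesize / 1024) / AllocatedKB as PctTouched
from master..monCachePool
where AllocatedKB > 0
order by CacheName, IOBufferSize
go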

 

BTW - PagesTouched doesn't work for log caches....they will always be "touched"....  Sizing a log cache should take into consideration the number of PhysicalReads by the checkpoint process or on the log device itself - as well as any process that does a dump tran.

 

Now, then - how much of which table is in which buffer pool.....wellllll....you can't really tell that (unless you only have 1 buffer pool) and it likely can be fairly dynamic.  monCachedObject does report cache consumption by table/index - but doesn't split it by pool.   However, it is interesting to graph and see if there are sudden increases in cache consumption - which indicates that a lot of data had to be read - and possibly could use a large IO pool.  It also is a good indication of cache volatility which is much better for cache tuning than simplistic cache hit rates.

 

One thing to remember is that a large IO will not be used if 1 page (or more) of an extent is already in cache - so the sequence of queries could change where things are in cache from day to day.   However, a good place to look is monOpenObjectActivity.    In earlier ASEs there were at least columns tracking APFReads - but unlike the APFReads in monDeviceIO, the APFReads in monOpenObjectActivity include APFs satisfied from cache as well as physical reads....but it is more likely that APFs would hit a large IO, so you could probably compare APFs vs. LogicalReads and get a rough idea.   Later versions of ASE (15.7 ESD #4 and higher, I think) added the columns IOSize1Page, IOSize2Pages, IOSize4Pages, and IOSize8Pages, which tell you precisely how many large reads (and writes) were done.
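
Something like this would surface the per-object large-IO counts (column names as I recall them from 15.7 ESD #4 and up; verify against your monTables output):

-- compare APF activity and large-IO counts per object
select DBName, ObjectName, LogicalReads, APFReads,
       IOSize1Page, IOSize2Pages, IOSize4Pages, IOSize8Pages
from master..monOpenObjectActivity
order by IOSize8Pages desc
go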

 

You also have to keep some other things in mind....APFs are not always large IOs, but large IOs are quite often due to APFs.    APFs can be physical reads of any size and are issued when ASE knows it is going to need the next page (or set of pages) even before it reads the data.   For example, on a standard B-tree index traversal an APF makes no sense, as the index node values have to be evaluated to determine which branch to take.   However, if an index leaf scan is being used, then ASE knows it can submit the APFs for all the index leaf pages.

 

Also remember that APFs are not always physical reads.   The query optimizer considers how much data is in cache as a costing factor, but it doesn't know whether the data will still be in cache at execution time - nor whether the specific rows (and all the rows) necessary are in cache.   So it may choose an APF strategy anyway.   Pages that are already in memory may still have an APF issued - it will simply be satisfied from cache.

 

One thing sp_sysmon reports that MDA does not: how many APFs required physical IOs vs. were satisfied from cache, as well as why APFs were denied.

 

However, this also takes a bit of thinking.   When is a large IO pool used????

 

1) APF reads for pool sized contiguous chunks from disk (note - APF's can be from memory or disk).   This usually is due to a index leaf scan, a range scan or a table scan.

 

2) Certain DML operations (e.g. select/into will use a large IO pool to create the NEW table)

 

3) Bulk load operations (e.g. minimally logged bcp, SQL merge inserts in 16.x)

 

4) Certain maintenance operations (e.g. update stats, reorg, dbcc, etc.)

 

In other words - not as often as you would think.   For example, for the last couple of days I have been running a job to measure the impact of various configs on create index in ASE 16 - I thought I would see a lot of large IOs (due to the implied table scan necessary for most index creations) - but instead I am seeing a lot more single page reads - on the order of tens of millions of single page reads vs. tens of thousands of large IOs.

 

Additionally, sometimes the presence of APFs indicates that there may be query issues - e.g. ineffective indexes or table scans.   Rather than increasing the large IO pool size to accommodate them, it is better to find the queries and fix them.   Of course, some queries legitimately need to read contiguous pages (e.g. index leaf scans and range scans), but ideally we would like to minimize the number of rows scanned to just those necessary to materialize the result.

Yes - it could be argued that increasing the large IO pool to help bad SQL will speed things up - but often there is a price, and that price is less of the other data staying in cache (that data is likely using the small IO pools due to single page reads), which means other queries will run slower.   So you need to balance the impact of tuning for someone's bad SQL (or bad indexing, or bad QP processing) against other, possibly more critical, queries.

Re: How big to make Large IO cache pools ?


Thanks - that certainly agrees with much of what we've found by trial and error.

 

 

Considering our default data cache


 

IOBufferSize   AllocatedKB    PagesTouched   PagesRead    CacheName            PagesTouched/AllocatedKB
16K            31,744,000     7,312,309      98,242,928   default data cache   23.04%
128K           10,240,000     479,160        71,024,280   default data cache   4.68%


The PagesTouched/AllocatedKB ratio shows the 16K pool is used a lot more than the 128K pool, and yet the pages read into each pool are very similar: 98M vs 71M.

So I guess we're reading a lot of pages into the large IO pool but actually re-using very few of them.


Thanks again


