Bytes | Software Development & Data Engineering Community
User defined data type used in stored procedure parameters

I have several stored procedures whose parameters are defined with
user-defined data types. These procedures take 10 - 50 seconds to run,
depending on the procedure.
If I change the parameters to the underlying base types, such as
varchar(10), etc., the stored procedure takes less than a second to
return records. The user-defined types are mostly based on varchar,
with some others such as int. They are all input parameters.
Any ideas on why the stored procedure would run much faster without
user-defined types?

Using SQL Server 2000.

Thanks,
DW

Jul 23 '05 #1
13 replies · 10,373 views
Why do you use UDTs for parameters? I've no idea if there's a performance
difference but UDT rules and defaults don't get applied to variables so I
see little or no benefit in using them.

UDTs, rules and defaults are backwards compatibility features. Going
forward, it is highly recommended that you use constraints instead - much
more powerful and easier to use.
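As an illustration of the constraint approach (the table and constraint names here are invented for the example), a DEFAULT and a CHECK constraint can replace a rule and default bound to a UDT:

```sql
-- Hypothetical table: constraints replace a rule/default bound to a UDT
CREATE TABLE Orders (
    OrderId   int     NOT NULL PRIMARY KEY,
    OrderType char(1) NOT NULL
        CONSTRAINT DF_Orders_OrderType DEFAULT 'A'
        CONSTRAINT CK_Orders_OrderType CHECK (OrderType IN ('A', 'B', 'C'))
)
```

Unlike a rule, a CHECK constraint can also reference other columns in the same row, and the optimizer can take it into account.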

--
David Portas
SQL Server MVP
--
Jul 23 '05 #2
David Portas (RE****************************@acm.org) writes:
Why do you use UDTs for parameters? I've no idea if there's a performance
difference but UDT rules and defaults don't get applied to variables so I
see little or no benefit in using them.

UDTs, rules and defaults are backwards compatibility features. Going
forward, it is highly recommended that you use constraints instead - much
more powerful and easier to use.


I'm sorry, but I think this is very poor advice.

First, user-defined data types are by no means a deprecated feature. (In
SQL 2005, Microsoft has added proper DDL syntax to create them.) User-
defined data types are a not wholly satisfactory replacement for domains,
but they are definitely a useful feature. In the database I work with, about
99% of the character columns are defined through some user-defined data type.

Take an example: say there is a code in your database which appears in
many tables, and in plenty of parameters and variables in stored
procedures etc. Say that this code is five characters long, and you now find
that you need to make it longer, say eight characters. If you use a UDT, there
is one file you need to check out and change. If you have declared all the
columns, parameters, and variables as plain varchar(5), you have tons of
work to find all the references, and you can never be really sure that
you haven't missed one somewhere. (To make it really worse, assume
that you have several other varchar(5) entities that should not be changed.)
Even with a UDT you have a lot of work to compose a script to implement the
change in the tables. But this is a script that you can run and test, and
you can be sure that once it passes the tests it's OK. With a
plain varchar(5) you may find that the response to your customer is
"sorry, we cannot implement this change safely".

Second, rules and defaults are indeed deprecated by Microsoft, but that's
silliness on their side. Say that you have a type column which permits
the values A, B and C, and this type column appears in several tables.
If you define a rule and bind it to the type, it's a very simple matter to
permit the value D as well. If you have constraints, you have a number of
objects in the database all saying the same thing. Binding rules and defaults
to a user-defined data type is a very powerful feature in SQL Server.
(Binding rules and defaults directly to columns is another matter; in that
case constraints are better.)
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #3
(da*********@hotmail.com) writes:
I have several stored procedures with parameters that are defined with
user defined data types. The time it takes to run the procedures can
take 10 - 50 seconds depending on the procedure.
If I change the parameter data types to the actual data type such as
varchar(10), etc., the stored procedure takes less than a second to
return records. The user defined types are mostly varchar, but some
others such as int. They are all input type parameters.
Any ideas on why the stored procedure would run much faster if not
using user defined types?


What happens if you change it back?

SQL Server has a feature known as parameter sniffing: the first time a
procedure is run, the optimizer takes the input values as guidance for
building the plan. The effect is that if the first call is made with
atypical values, a plan that is poor for the more typical values can
stick in the cache. Thus, according to this theory, the change in data
types had no importance in itself; what mattered was that you flushed
the query plan from the procedure cache.
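If you want to test that theory, you can flush the plan and force recompilation (the procedure, table, and column names below are just placeholders):

```sql
-- Mark the procedure so that a new plan is built on its next execution:
EXEC sp_recompile 'spSelTMScheduleReportTest'

-- Alternatively, create the procedure WITH RECOMPILE to get a fresh plan
-- on every call (hypothetical body; costs compile time each execution):
CREATE PROCEDURE spGetScheduleForOffice
    @OfficeId varchar(10)
WITH RECOMPILE
AS
SELECT ScheduleId, OfficeId, WeekOf
FROM   Schedule
WHERE  OfficeId = @OfficeId
```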

There are some other possibilities on the same theme if indexed views
or indexed computed columns are involved. In that case it could be
that the procedure was originally created with QUOTED_IDENTIFIER and/or
ANSI_NULLS off. These settings are saved with the procedure, and both
must be on for indexes on views and computed columns to be used. If you
recreated the procedure from Query Analyzer, it is likely that you had
these settings on, as they are on by default in QA.
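A sketch of how you would recreate a procedure with the settings captured correctly (the procedure name and body are placeholders; the point is that the SET statements come before CREATE PROCEDURE):

```sql
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
GO
-- The two settings above are recorded with the procedure when it is
-- created, and both must be ON for indexes on views and computed
-- columns to be usable from this procedure.
CREATE PROCEDURE spExampleProc AS
SELECT GETDATE()
GO
```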

Yet another possibility is that some of the parameters had an incorrect
type, and you replaced it with the correct type. In this case implicit
conversions can lead to indexes not being used.
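A hypothetical illustration of that case (assuming an indexed varchar column): comparing it against an nvarchar parameter forces a conversion on the column side, which can rule out an index seek on SQL 2000:

```sql
-- Suppose Offices.OfficeId is varchar(10) and is the leading index column.
-- Mismatched type: the column is converted to nvarchar, so the index
-- cannot be used for a seek:
CREATE PROCEDURE spGetOfficeBad @OfficeId nvarchar(10) AS
SELECT OfficeId, OfficeName FROM Offices WHERE OfficeId = @OfficeId
GO
-- Matching type: a plain index seek is possible:
CREATE PROCEDURE spGetOfficeGood @OfficeId varchar(10) AS
SELECT OfficeId, OfficeName FROM Offices WHERE OfficeId = @OfficeId
GO
```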

We use a lot of user-defined types in our database, and we have never
noticed any problems with them as such.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #4
I could have said UDTs *as implemented in SQL Server 2000* exist for
backwards compatibility reasons, but the OP had already stated he was
using 2000.

I don't see that change control is any easier with a UDT than what can
be achieved with a data dictionary and proper naming conventions. As far
as I can see UDTs make things much harder in the example you gave,
because you have to unbind rules and defaults, alter the column type,
drop and recreate the UDT, alter the column type again and then rebind.
I can't say I've tried this heavily in production systems, but it seems
like a lot more complexity and overhead than a single ALTER TABLE
statement. I'd be interested to hear if your experience is different.
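For reference, the sequence I have in mind looks roughly like this (all names are invented; in practice there is one ALTER per column that uses the type):

```sql
-- 1. Unbind the rule and default from the type
EXEC sp_unbindrule 'udt_Code'
EXEC sp_unbindefault 'udt_Code'
-- 2. Move each column off the type, onto the new base type
ALTER TABLE SomeTable ALTER COLUMN Code varchar(8)
-- 3. Drop and recreate the type with the new definition
EXEC sp_droptype 'udt_Code'
EXEC sp_addtype 'udt_Code', 'varchar(8)'
-- 4. Move the columns back onto the type and rebind
ALTER TABLE SomeTable ALTER COLUMN Code udt_Code
EXEC sp_bindrule 'Code_rule', 'udt_Code'
EXEC sp_bindefault 'Code_def', 'udt_Code'
```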

Rules are deprecated by MS, as you rightly say. This is a good thing for
two excellent reasons. Firstly, rules don't offer anything like the
functionality of CHECK constraints - specifically, they can only
reference a single column at a time. Even assuming you find some
benefit in rules, you'll still need to use constraints as well, so why
persist your business rules in two different and potentially
contradictory places? Secondly, the optimizer won't take advantage of
rules as it will the equivalent constraints. This can make a big
difference in some cases. Given these two advantages I don't see any
argument for rules at all.

Defaults? I don't know what they offer that default constraints don't.

Finally, an unquantifiable benefit. Most people I know follow the
"official" line and consider these legacy features. If people use
them less and less, then I expect many of us will forget the
syntax and intricacies of bound rules and defaults (I know I do). That
means code we write today will be obscure to others in the future and may
make them less productive (as an example, look at the number of people
who are still puzzled by the old *= join syntax).

I still think this is good advice. Maybe I should have explained my
reasoning a bit better to start with so that the OP can make up his own
mind.

--
David Portas
SQL Server MVP
--

Jul 23 '05 #5
With testing, I've discovered further info.

If I declare variables in the stored procedure, set the declared
variables to the values passed as parameters, and use the declared
variables instead of the parameters in the WHERE condition, the results
are returned almost instantaneously.

It doesn't matter whether or not I change the parameter data types from
user-defined data types.

This does change the query plan. Both plans still use a clustered
index seek, but in the faster plan, one bookmark lookup that was 1%,
one index scan that was 3%, and one filter that was 30% are no longer
in the plan.

ALTER PROCEDURE spSelTMScheduleReportTest
    @tmpOfficeId varchar(10),
    @tmpTMId     varchar(10),
    @InputWeek   varchar(10),
    @tmpTeamId   varchar(4)
AS
SET NOCOUNT ON

DECLARE @ErrCode INT
DECLARE @RC      INT
DECLARE @WeekOf  smalldatetime

-- Local variables declared with the user-defined data types:
DECLARE @OfficeId udt_OfficeId,
        @TMId     udt_TeamMemberID,
        @TeamId   udt_TeamNumber

-- Copy the parameters into the local variables:
SET @OfficeId = @tmpOfficeId
SET @TMId     = @tmpTMId
SET @TeamId   = @tmpTeamId

Jul 23 '05 #6
What we really need is CREATE DOMAIN from SQL-92. Same idea,
better syntax.
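For those who haven't seen it, the SQL-92 syntax bundles the base type, default, and check into one declaration (not supported by SQL Server; shown here only as standard syntax with made-up names):

```sql
CREATE DOMAIN office_id AS VARCHAR(10)
    DEFAULT 'NONE'
    CHECK (VALUE <> '');
```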

Jul 23 '05 #7
--CELKO-- (jc*******@earthlink.net) writes:
what we really need is the CREATE DOMAIN from SQL-92. Same idea,
better syntax..


Yupp. I'll have to admit that I don't know exactly what a domain buys
me, but I expect it to be similar to the user-defined data types
in SQL Server, with some extra bells and whistles.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #8
David Portas (RE****************************@acm.org) writes:
I could have said UDTs <emphasis> as implemented in SQL Server 2000
</emphasis> exist for backwards compatibility reasons but the OP had
already stated he was using 2000.
I'm sorry, but I can't make any sense of this at all. User-defined data
types are a first-class citizen in SQL 2000 as well as in SQL 2005. Any
talk about user-defined data types existing only for backwards
compatibility is incorrect.

Note also the terminology: in SQL 2005 a "user-defined type" is a
data type implemented in a CLR assembly. The types that exist in
SQL 2000 are called "user-defined data types" (UDDT) or "alias data
types", in a confusing mix. (The documentation says "alias"; the tools
appear to still go with UDDT.)
I don't see that change control is any easier with a UDT than can be
achieved with a data dictionary and proper naming conventions.
A type is something you can trust. Naming conventions are not. We have 3700
stored procedures, developed over a span of more than ten years. Should we
trawl them for naming conventions? That's not a serious suggestion.
As far as I can see UDTs make things much harder in the example you gave
because you have to unbind rules and defaults, alter the column type,
drop and recreate the UDT, alter the column type again and then rebind.
I can't say I've tried this heavily in production systems but it seems
like a lot more complexity and overhead than a single ALTER table
statement. I'm interested if your experience is difference.
Well, this is what one of our type-definition files looks like:

EXEC sp_addtype 'aba_upduser', 'varchar(30)'
go
CREATE DEFAULT aba_upduser_def AS system_user
go
EXEC sp_bindefault 'aba_upduser_def', 'aba_upduser'
go
CREATE RULE aba_upduser_rule AS @x = system_user
go
EXEC sp_bindrule 'aba_upduser_rule', 'aba_upduser'
go

When we run this file, our load tool will automatically generate the
matching sp_drop and sp_unbind calls. Although, adding them manually is
no major task.

If we needed to change the type to nvarchar(128), that's a one-line
code change. Granted, there are a couple of hundred tables to rebuild -
but at least it's a simple query against the system tables to find them.
(Or look up the cross-reference in the DB-doc page for the type.) Running
the update script is another story.

Another example: say that we decide in 2005 that we want to use
ORIGINAL_LOGIN() rather than SYSTEM_USER. For us that is a single file
to change and run. One single file.

With CHECK and DEFAULT constraints we would have several hundred tables
to change. Yes, we have a good tool for doing that. But we would have
to tailor our update scripts which are built on a principle that a
table change usually requires a reload.
Rules are deprecated by MS as you rightly say. This is a good thing for
two excellent reasons. Firstly, rules don't offer anything like the
functionality of CHECK constraints - specifically they can only
reference a single column at a time.
I never said that you should only use rules. Rules are good when you
have the same behaviour in many columns all over the database. Constraints
are good when you have a behaviour which is peculiar to one table.
Since a rule applies to a domain, the fact that it cannot interact
with other columns is a moot point. (Yes, occasionally there may be a
pair of columns that appear in many tables with some common rule
between them. But that is not common.)

Rules and constraints are not contradictory features; they complement
each other.
Even assuming you find some benefit in Rules you'll still need to use
constraints as well so why persist your business rules in two different
and potentially contradictory places?
Huh? I'm sorry, but you are not making any sense. You are advocating
that the same business rule should be duplicated in many places all over
the database - with the possibility that the copies differ.
Secondly, the optimizer won't take advantage of rules as it will the
equivalent constraints. This can make a big difference in some cases.
This is a correct observation. However, this presumes that the constraints
have never been enabled WITH NOCHECK, which unfortunately is the default
when you re-enable a constraint.
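To spell that out (hypothetical table and constraint names): re-enabling a constraint with the default syntax leaves the existing rows unverified, so the optimizer cannot rely on it; WITH CHECK validates them:

```sql
-- Re-enables the constraint but does NOT validate existing rows;
-- the optimizer will not trust it:
ALTER TABLE Orders CHECK CONSTRAINT CK_Orders_Type

-- Validates existing rows as well, so the optimizer can use it:
ALTER TABLE Orders WITH CHECK CHECK CONSTRAINT CK_Orders_Type
```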
Given these two advantages I don't see any argument for rules at all.

Defaults? I don't know what they offer that default constraints don't.
Again: they reduce the number of places you have to change, and reduce
the amount of duplicated information.
Finally, an unquantifiable benefit. Most people I know follow the
"official" line and consider these as legacy features. If people are
using them less and less then I expect many of us will forget the
syntax and intricacies of bound rules and defaults (I know I do). That
means code we write today will be obscure to others in future and may
make them less productive (as an example, look at the number of people
who are still puzzled by the old *= syntax).

I still think this is good advice. Maybe I should have explained my
reasoning a bit better to start with so that the OP can make up his own
mind.


You may not know and understand user-defined data types and bound rules
and bound defaults. But it's a huge leap from there to telling other
people not to use them. I don't know DTS or Analysis Services, but
I would never dream of telling people not to use them.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #9
(da*********@hotmail.com) writes:
With testing, I've discovered further info.

If I declare variables in the stored procedure, and then set the
declared variables to the values passed as parameters, and use the
declared variables instead of the parameters in the where condition,
the results are returned almost instantaneously.

It doesn't matter whether or not I change the parameter data types from
user-defined data types.

This does change the query plan. Both plans still use a clustered
index seek.
But in the faster plan, one bookmark lookup that was 1%, one index scan
that was 3%, and one filter that was 30% is no longer in the plan.


There is still too little information for me to get an understanding of
what is really going on. Seeing the full procedure, the type definitions,
and the table and index definitions could give more clues, although I
suspect hands-on access would be required to get a real grip on the
situation. If even that is possible - it does sound mysterious.

Now, what you describe - that the plan changes if you copy parameters to
local variables - is not completely unknown. We found in several cases
that this was a good idea, where we had code like:

CREATE PROCEDURE some_sp @datepar aba_date AS
...
IF @datepar IS NULL
SELECT @datepar = @today
...
WHERE datecol = @datepar

The standard call would be with @datepar as NULL, and the optimizer would
build the plan for NULL, and then the plan would be bad for the real
value.

When a query includes a condition on a local variable, the optimizer does
not know the value, so it uses a standard assumption, which is likely
to be better than a guess derived from a bogus value.

If statistics are inaccurate or out of date, this can happen with real
values as well, and sometimes the standard assumption can be better in
this case.

However, I can't place user-defined data types into the picture. A UDDT
comes with a nullability setting, but I cannot see how that could have
any impact.

Anyway, run an UPDATE STATISTICS tbl WITH FULLSCAN on the involved
tables, and then run sp_recompile on the procedure, and then try the
various combinations of declarations of parameters and local variables.
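Concretely, that would be something along these lines (substitute the actual tables your procedure touches; the table name here is a guess):

```sql
UPDATE STATISTICS Schedule WITH FULLSCAN   -- repeat for each involved table
EXEC sp_recompile 'spSelTMScheduleReportTest'
```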

Not that I am sure that it will improve things, and even if it does,
we may still not understand what happened.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #10
> Granted there are a couple of hundred tables to rebuild -
but at least it's a simple query to the system tables to find them.
(Or look up the cross-ref in DB-doc page for the type.) Running the
update script is another story.


That was my main point about the inconvenience of UDDTs - the cost of
updating them seems to outweigh the modest benefits for change management.
I can appreciate that in some environments different considerations might
apply.

You are right that UDDTs aren't going away right now, but as for rules and
defaults, I would be reluctant to use them in a solution for a client when
Microsoft has expressly designated them a legacy feature and provided a
recommended alternative. I guess that's my consulting bias for "best
practice" over "because it works". On the other hand, you may already have
something that works. That's why my reply to the OP was framed as a question
as well as a suggestion.

--
David Portas
SQL Server MVP
--
Jul 23 '05 #11
David Portas (RE****************************@acm.org) writes:
Granted there are a couple of hundred tables to rebuild -
but at least it's a simple query to the system tables to find them.
(Or look up the cross-ref in DB-doc page for the type.) Running the
update script is another story.
That was my main point about the inconvenience of UDDTs - the cost of
updating them seems to outweigh the modest benefits for change
management. I can appreciate that in some environments different
considerations might apply.


The cost of changing 400 tables has little to do with a UDDT. You will
have a fun time changing 400 tables with or without one. But at least
with a UDDT you don't have to touch the source code for the tables.

And the benefits for change management are not modest - they are
significant.

There is also another issue. We have some 50-100 data types for codes
of various lengths in our database. Without UDDTs, programmers would
have to remember whether an xyzcode was 5, 8, 10 or however many
characters, and they would go wrong more than once. With a UDDT, they
know that an xyzcode is of the type aba_xyzcode, and in most situations
don't even have to bother about the length.
You are right that UDDTs aren't going away right now but as for Rules and
Defaults, I would be reluctant to use them in a solution for a client when
Microsoft has expressly designated them a legacy feature and provided a
recommended alternative. I guess that's my consulting bias for "best
practice" over "because it works". On the other hand you may already have
something that works.
It's not only "because it works", but because it adheres to basic
principles of software engineering. Of course, as a consultant you
should adapt to the habits of the shop you work for, unless they have
explicitly asked you to give suggestions for their local standards.

My perspective is that of maintaining and constantly developing a large
application - and I've made those sweeping type changes more than once.
While daunting, they were doable and clearly manageable.
That's why my reply to the OP was framed as a question
as well as a suggestion.


DW did not even discuss rules and defaults. That was something you
brought in.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #12
Thanks for everyone's input.
At this point I've decided to use declared variables in the procedures
that are most affected. After several days of trying to find a
solution where I could continue to use UDTs, I don't have any more time
to devote to it.
The biggest disappointment is not being able to use the user-defined
data types in all queries.

Thanks,
DW

Jul 23 '05 #13
(da*********@hotmail.com) writes:
Thanks for everyone's input.
At this point I've decided to use declared variables in the procedures
that are most affected. After several days of trying to find a
solution where I could continue to use udt's, I don't have anymore time
to devote to it.
The biggest disappointment is not being able to use the user defined
data types in all queries.


Pity. And it's really strange, because as I said, we use UDDTs
heavily in our applications, and we have not noticed any problems. Or
maybe we have not been looking...

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #14

This thread has been closed and replies have been disabled. Please start a new discussion.
