- Kevin Feasel
- Mala Mahadevan
- Mike Chrestensen
Notes: Questions and Topics
Welcome to TriPASS’s newest board member, Mike Chrestensen. Mike is taking over for Tom Norman, who has moved down to Texas. Mike will serve out the remainder of Tom’s term and will be up for election at the end of 2021.
Tips for Exam Study
Denise e-mailed us with a question:
My question is about the 70-761 Microsoft Exam and AZ-900 Exam (I do have books). Do you know anybody in our group who passed those and would provide tips for further study & preparation for the exams? Recommendations for testing tools?
Mike and I both recommended Itzik Ben-Gan’s exam prep book for 761 (and its prior iteration, 461). Frankly, even if you never intend to take the exam, this is a book worth owning because Itzik does a great job of teaching and includes a surprising amount of depth in an exam study book.
As far as AZ-900 goes, this is an introductory exam. My recommendation for that one is to check out the exam skills outline and also try the Microsoft Learn items linked to the exam. You’re being tested on concepts, basic services, and higher-level generalities more than deep knowledge on any given service, and so I wouldn’t stress too hard on this one.
Redgate Acquires PASS Assets
We also covered Redgate’s acquisition of PASS assets. At this point, there isn’t too much information available, but we do know a couple of things:
- Redgate does not get access to any user or attendee data that PASS previously owned. If you want to keep in the loop, you’ll have to sign up anew.
- Redgate promises a PASS Summit-like conference, as well as the return of SQL Saturdays. As of right now, the SQL Saturday URL doesn’t direct to a server, but I imagine they’re working on building out a new site. PASS did not give them any code as part of the acquisition, so they’re building anew. Which, frankly, is a good idea when you consider what the SQL Saturday system ran on…
- Redgate did obtain recordings from various PASS virtual groups and events, and they are working to bring that back online.
If you want to share your thoughts on what the future of the SQL Server community should look like, there’s a forum on SQL Server Central for this.
Mala and I shared our guarded optimism on this, though it will be interesting to see whether Redgate can get a consortium of vendors interested in sponsoring a Redgate-led conference.
Removing Features from SQL Server
Our key time sink for the evening was a Brent Ozar blog post which Mala and I both marked on our show agenda. The gist is that Brent tweeted out a question: if you could get rid of one feature in SQL Server, what would it be? For the show, we looked at each of the features that people submitted and gave our takes. Forthwith:
- Cursors. Nope, keep them. Cursors can solve certain problems that you can’t solve set-based. Prior to SQL Server 2012, cursors were the best solution for running totals (assuming you didn’t want to bank on accidental behavior around clustered index sorting). I’ve also used cursors to solve problems which don’t have pleasant set-based solutions. Then there’s the value in using cursors for administrative work. Yeah, you can say to use PowerShell or .NET or bash or some other language to do the work, but if I’m in T-SQL and don’t want to build out a bunch of extra code in other languages, this is a good route. Also, for all of the people who said cursors, nobody wanted to get rid of WHILE. This is despite WHILE typically being much slower than properly-tuned cursors. I get that there are people who misuse cursors, and I get that set-based solutions are generally much better. But that doesn’t mean it makes sense to drop the functionality.
- MERGE. This is one where I’d rather they fix it than dump it. There are too many bugs around MERGE for me to recommend that people actually use it, but it’d be so nice if it worked as expected.
- Triggers. Again, no. Those are useful for good reasons. They can be misused, but there’s functionality that I can’t easily replicate without triggers. Or I can, but then other people want to get rid of them as well…
- VARBINARY(MAX) or IMAGE data types. No to the first, yes to the second. The IMAGE data type is already deprecated; don’t use it anymore. As far as VARBINARY(MAX) goes, it’s quite useful. I can store compressed data, store ML models, and store things which I need access to in SQL Server without having to go elsewhere. Yeah, I get the pain of storing website images in a database—don’t do that. But that doesn’t mean getting rid of the data type is a good idea.
- GEOGRAPHY and GEOMETRY data types. Those are quite useful for spatial data, and if I had a dollar for every time somebody put longitude in the latitude column or vice versa, I’d have several dollars. Also, what are you storing longitude and latitude as? Float? Decimal? VARCHAR? It’s also a lot more pleasant to perform geographical distance calculations with these data types than it is if you’re just storing floats.
- UNION and DISTINCT. This suggestion came from a person who’s been burned by slower-than-necessary code. Otherwise, these constructs are so useful that removing them is a non-starter.
- WITH(NOLOCK) and instead force the use of the term READ UNCOMMITTED. I can get behind this, though I shudder at the hundreds of millions of lines of code which would break as soon as that happened…including, I’m sure, Microsoft code.
- Auto-shrink. Sign me up for this one. I don’t think there’s a single good use for it, and would be happy to see it disappear entirely from the product. If you need to shrink a database, that should be an uncommon occurrence. Uncommon enough that you shouldn’t need to automate it.
- Auto-close. I do have a good use for auto-close: development instances with large numbers of infrequently-accessed databases. Yeah, that’s a pretty specific use case, but if you’re in a low-memory situation or have a huge number of databases which rarely get used (and don’t mind the spinup cost when you do use them), auto-close is a viable solution.
- Lightweight pooling. My recollection is that this was useful a couple decades ago, but has been a bad idea since, oh, 2005 or so.
- Priority boost. This was always a bad idea. I’m okay with canning it.
- OLE Automation. I haven’t worked with this, to be honest, so I don’t have a strong opinion.
- CLR. A lot of DBAs dislike CLR, but I think it’s reflexive, like a vampire seeing sunlight. Or a developer seeing sunlight. If you want to argue that CLR has a fairly niche role, okay. As long as you ignore all of the stuff in T-SQL which is really CLR (PARSE, FORMAT, STRING_SPLIT, STRING_AGG, etc.). But I’ve written and used CLR modules to great effect, as it lets me do things in-database, simplifying solutions.
- Service Broker. My beef with Service Broker is that it never got the UI it deserved. It’s a solid queueing system which works quite well, but it takes a while to understand.
- Linked servers. The person mentioned that any work which requires linked servers should be done “in the mid-tier.” Sometimes I don’t have a mid-tier. Sometimes I don’t want a mid-tier. Occasionally, I just want to read the contents of a text file, and linked servers are an option (or OPENROWSET or PolyBase, both of which I’m also counting in this). Linked servers are useful for ETL, reference data lookups, migrating data from other data platforms, and plenty more. Yes, they can be slow and I don’t recommend using them frequently in OLTP scenarios. But SQL Server handles more than OLTP.
- Database Tuning Advisor. This is an interesting one to think about. The kind of shop which uses DTA typically doesn’t know better and often ends up in a hole. But without DTA, they’d be in a different and possibly more painful hole. I don’t like DTA at all and hate having to clean up after it so much, but when you don’t have a DBA and “the database is slow,” I see the allure.
- Full-text indexing. Meh. I’ve seen it in action a couple of times, seen it fail to work a couple of times (usually because data sets are too large), and have never been too thrilled with it. But it’s definitely better than LIKE '%Something%' searches all over your code.
- XML indexing. Again, meh. I don’t think I’ve ever seen it in production, but it does work. I guess.
- FILESTREAM / FileTable. I like FileTable. I think it does a good job of what it’s supposed to do: showing metadata for files in a share. You can’t complain about storing files in the database, but you also don’t need some custom-written app to keep a table in a database up to date with a file share. It’s a niche set of functionality, but again, I’ve benefitted from having this.
- SSAS / SSRS / SSIS, and change the pricing model. Strongly against. SQL Server has always had an AND pricing model: you get this AND this AND this AND this, all for the same price. By contrast, Oracle has an OR pricing model. If you want feature A, you pay $$. For feature B, you pay an additional $$. For feature C, you pay an additional $$$. Pretty soon, $$$$$$$ adds up to a bunch of dollars. The biggest problem is that splitting out the products wouldn’t really decrease the price; it’d likely kill the other products as cross-subsidies would no longer apply. Today, I pay for SQL Server and get SSAS for free, so I can try it out. Maybe it turns out to be great and I use it so much that I add it to another server (and pay that additional license fee). Or maybe I don’t like it. If business decision-makers had to buy SSAS separately, there’d be a lot less of it and those of us who do use it would be worse off as a result.
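To make the running-totals point about cursors concrete, here’s a minimal sketch of the two approaches. The dbo.Sales table and column names are hypothetical, and the cursor body is trimmed to the accumulation step:

```sql
-- Hypothetical table: dbo.Sales (SaleID INT, Amount DECIMAL(10,2)).
-- Pre-2012 approach: a cursor accumulating a running total row by row.
DECLARE @RunningTotal DECIMAL(18,2) = 0;
DECLARE @SaleID INT, @Amount DECIMAL(10,2);

DECLARE sale_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT SaleID, Amount
    FROM dbo.Sales
    ORDER BY SaleID;

OPEN sale_cursor;
FETCH NEXT FROM sale_cursor INTO @SaleID, @Amount;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @RunningTotal += @Amount;
    -- In practice you'd write (@SaleID, @RunningTotal) somewhere here.
    FETCH NEXT FROM sale_cursor INTO @SaleID, @Amount;
END;
CLOSE sale_cursor;
DEALLOCATE sale_cursor;

-- SQL Server 2012+ equivalent: a window function with an explicit frame.
SELECT SaleID,
       SUM(Amount) OVER (ORDER BY SaleID
                         ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM dbo.Sales;
```

The ROWS UNBOUNDED PRECEDING frame is what makes the window version both correct and fast; before 2012, the cursor genuinely was the reasonable option.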
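On the NOLOCK point above: the two spellings are already synonyms today, which is why the suggestion is about forcing the more honest name rather than removing functionality. A sketch, with a hypothetical dbo.Orders table:

```sql
-- These two table hints do the same thing; NOLOCK is the alias.
SELECT OrderID, Total
FROM dbo.Orders WITH (NOLOCK);

SELECT OrderID, Total
FROM dbo.Orders WITH (READUNCOMMITTED);

-- The session-level form spells out exactly what you're opting into:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, Total
FROM dbo.Orders;
```

The argument is that READ UNCOMMITTED makes the dirty-read trade-off visible in the code, whereas NOLOCK reads like a harmless performance hint.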
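And on the full-text point: the reason LIKE '%Something%' is the worse alternative is the leading wildcard, which prevents an index seek. A sketch, assuming a hypothetical dbo.Documents table with a full-text index on the Body column:

```sql
-- Leading-wildcard LIKE: scans every row, can't seek an index.
SELECT DocumentID
FROM dbo.Documents
WHERE Body LIKE '%warranty%';

-- With a full-text index on Body, CONTAINS uses the full-text engine:
SELECT DocumentID
FROM dbo.Documents
WHERE CONTAINS(Body, 'warranty');
```

CONTAINS only works once the full-text catalog and index exist, which is part of the setup cost the bullet above is lukewarm about.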
Summing it up, I agree with some of these, particularly auto-shrink, priority boost, the IMAGE data type, and lightweight pooling. I can be swayed to a couple more. But most of these are useful tools whose detractors have seen them misused. It’s not the trigger’s fault that some developer created cross-server, multi-step triggers which call multi-statement UDFs, and which we update by hitting the ASMX web service hosted on our SQL Server.