SQL Server 2008 DBA Tutorial PDF


It is no longer unheard of to have terabyte databases running on a SQL Server. SQL Server administration used to be just the job of a database administrator (DBA), but as SQL Server proliferates throughout smaller companies, many developers have begun to act as administrators as well. Additionally, some of the new features in SQL Server 2008 are more developer-centric, and poor configuration of these features can result in poor performance. SQL Server 2008 now enables you to manage the policies on hundreds of SQL Servers in your environment as if you were managing a single instance. Administrators or DBAs support the production servers and often inherit the database from the developer. This book is intended for developers, DBAs, and casual users who hope to administer or may already be administering a SQL Server system and its business intelligence features, such as Integration Services. This is a professional book, meaning the authors assume that you know the basics about how to query a SQL Server and already have some rudimentary concepts of SQL Server. For example, this book does not show you how to create a database or walk you through the installation of SQL Server using the wizard.

Chapters 14 and 15 discuss how to optimize the T-SQL that accesses your tables and then how to index your tables appropriately. Chapters 16 through 20 consist of the high-availability chapters of the book. Chapter 16 covers how to use the various forms of replication, while database mirroring is covered in Chapter 17. Classic issues and best practices with backing up and recovering your database are discussed in Chapter 18. Chapter 19 dives deeply into the role of log shipping in your high-availability strategy, and Chapter 20 presents a step-by-step guide to clustering your SQL Server and Windows server.

In short, the new version of SQL Server focuses on improving your efficiency, the scale of your server, and the performance of your environment, so you can do more in much less time, and with fewer resources and people. To follow the examples in this book, you will need to have SQL Server 2008 installed. If you wish to learn how to administer the business intelligence features, you need to have Analysis Services and the Integration Services components installed.

If you do not have this edition, you will still be able to follow some of the examples in the chapter with Standard Edition.

Last section, we looked at the 8KB pages in our database. Those pages are the same whether they're on disk or in memory - they include the database ID and the object ID, so if we looked at all of the pages in memory, we could figure out which tables are being cached in memory right now.

The query below gives us the magic answers, but be aware that the more memory you have, the longer this will take. That might be completely okay - if you only regularly query just that amount of data - but what if we constantly need all of our data, and we're constantly pulling it from disk? I get so excited by these concepts. How fast are the cached pages changing?
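Here's a rough sketch of that kind of query against sys.dm_os_buffer_descriptors - run it in each database you care about, and treat it as an approximation rather than the exact script from the original course:

-- How much of each table or index in the current database is sitting in the buffer pool?
SELECT
    OBJECT_NAME(p.object_id)  AS object_name,
    COUNT(*) * 8 / 1024       AS cached_mb        -- pages are 8KB each
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
    ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
    ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
    OR (au.type = 2       AND au.container_id = p.partition_id)
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY cached_mb DESC;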

From the moment we read an 8KB page off disk, how long does it stay in memory before we have to flush it out of the cache to make room for something else we're reading off disk? The longer, the better, as I explain in my Perfmon tutorial.
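One way to watch that is the Page Life Expectancy counter, which SQL Server exposes through a DMV - a minimal sketch:

-- Page Life Expectancy: roughly how many seconds a page survives in the buffer pool.
SELECT [object_name], counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy';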

Do the results change based on time of day? This is a one-time snapshot of what's in memory at the moment, but it can change quickly. You might be surprised at how little memory is used for caching data. If you have automated processes that run a bunch of reports in a single database at 2AM, then the memory picture will look completely different then. Are we caching low-value data?

If you mix vendor apps and in-house-written apps on the server, you'll often find that the worst-written application will use the most memory.

Thing is, that might not be the most important application. Unfortunately, we don't have a way of capping how much memory gets used by each database. This is why most shops prefer to run vendor applications on separate virtual machines or servers - this way, they don't hog all the memory on a SQL Server that needs to serve other applications.

Do we have enough memory? Adding memory is the safest, easiest performance tuning change you can make. If you're in a VM or running Enterprise Edition, the memory question gets a lot tougher. Are we using memory for anything other than SQL Server? If we've got Integration Services, Analysis Services, Reporting Services, or any other applications installed on our server, these are robbing us of precious memory that we might need to cache data.
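To get a quick picture of where you stand, you could compare what SQL Server is allowed to use with what the box actually has - a sketch using standard system views:

-- What SQL Server is configured to use...
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)', 'max server memory (MB)');

-- ...versus what the server has available right now.
SELECT total_physical_memory_kb / 1024     AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;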

Put your management tools on a virtual machine in the data center, and remote desktop into that instead. Can we reduce memory needs with indexes? If we've got a really wide table (lots of fields) or a really wide index, and we're not actively querying most of those fields, then we're caching a whole bunch of data we don't need.

Remember, SQL Server is caching at the page level, not at the field level. The fewer fields, the more data we can pack in per page. The more we can pack in, the more data we're caching, and the less we need to hit disk. When I tune indexes on a server I've never seen before, sys.dm_os_buffer_descriptors is one of the first places I look. The database with the most stuff cached here is likely to be the one that needs the most index help.

I kept complaining to my SAN administrators because my storage didn't respond fast enough - my drives were taking 50ms or longer to deliver data for my queries. The SAN admin kept saying, "It's okay. The SAN has a cache." But that cache is shared between every server connected to the SAN - way less than what the SQL Server keeps in its own memory. Not gonna happen.

Say we're the phone company in Miami. We need to track each person's last name, first name, address, and phone number. We will never have two records in here with the same phone number.

We have to tell our database about that by making the phone number our primary key. When we make the phone number the primary key, we're telling SQL Server that there can be no duplicate phone numbers. That means every time a record is inserted or updated in this table, SQL Server has to check to make sure nobody exists with that same phone number. Miami has a few hundred thousand people. Throw in businesses, and let's say our table has 500,000 records in it.

That means by default, every time we insert one eensy little record, SQL Server has to read half a million records just to make sure nobody else has the same phone number! Well, that won't work, will it? So let's organize our table in the order of phone number.

That way, when SQL Server inserts or updates records, it can quickly jump to the exact area of that phone number and determine whether or not there are any existing duplicates. This is a clustered index. It's called clustered because - well, I have no idea why it's called clustered, but the bottom line is that if you could look at the actual hard drive the data was stored on, it would be stored in order of phone number.
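As a sketch (the table and column names are stand-ins, not anything from the original course), the phone number becomes the clustered primary key:

CREATE TABLE dbo.WhitePages
(
    PhoneNumber varchar(10)  NOT NULL PRIMARY KEY CLUSTERED,  -- the table is physically ordered by this
    LastName    varchar(50)  NOT NULL,
    FirstName   varchar(50)  NOT NULL,
    Address     varchar(100) NULL
);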

Now we have the table organized by phone number, and if we want to find people by phone number, it'll be very fast. While our computer systems will usually need to grab people's data by phone number, our customers and end users often need to look up numbers in other ways.

That's where non-clustered indexes come in.

In which we pretend to be the phone company, but instead of giving everybody unlimited calling, we just organize the data.

What Goes First? A Non-Clustered Index
Our customers constantly need to find people's phone numbers by their name. They don't know the phone number, but they know the last name and first name. We would create an index called the White Pages: a copy of the data organized by last name, then first name. Think about how you use the white pages:

1. You scan through pages looking at just the letters at the top until you get close.
2. When you get close, you open up the full book and jump to the right letters.
3. You can quickly find the right single record.

Now think about how you would do it without the White Pages. Think if you only had a book with 500,000 records in it, organized by phone number. You would have to scan through all 500,000 records and check the last name and first name fields. The database works the same way, except it's even worse! If a developer wrote a SQL query looking for the phone number, it would look like the sketch below. When you, as a human being, go through that list of 500,000 phone numbers, you would stop when you thought you found the right John Smith.
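Here's a minimal sketch of that index and that query, with hypothetical names (dbo.WhitePages, LastName, FirstName, PhoneNumber) standing in for whatever the original course used:

-- The "White Pages": a non-clustered index ordered by last name, then first name.
CREATE NONCLUSTERED INDEX IX_WhitePages_LastName_FirstName
    ON dbo.WhitePages (LastName, FirstName);

-- The kind of lookup a customer-service app would run:
SELECT PhoneNumber
FROM dbo.WhitePages
WHERE LastName = 'Smith'
  AND FirstName = 'John';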


The database server can't do that - if it finds John Smith at row 15, it doesn't matter, because there might be a few John Smiths. Whenever you do a table scan and you don't specify how many records you need, it absolutely, positively has to scan all 500,000 records no matter what.

If the database has an index by last name and first name, though, the database server can quickly jump to Smith, John and start reading. The instant it hits Smith, Johnathan, it knows it can stop, because there are no more John Smiths.

Covering Fields: Helping Indexes Help You
But that's not always enough.

Sometimes we have more than one John Smith, and the customer needs to know which John Smith to call. After all, if your name was John Smith, and the phone book didn't include your address, you'd get pretty tired of answering the phone and saying, "No, you want the John Smith on Red Road." Do we search the phone book by address? No, but we include it for convenience because when we DO need it, we need it bad.

And if we DON'T need it, it doesn't really hurt us much. This is called a covering index because it covers other fields that are useful. Adding the address field to our index does make it larger.
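In SQL Server terms, one way to build that is to keep the names as the index keys and tack the address on as an included column - a sketch with made-up names, which would take the place of the narrower index above rather than sit alongside it:

CREATE NONCLUSTERED INDEX IX_WhitePages_LastName_FirstName_Covering
    ON dbo.WhitePages (LastName, FirstName)
    INCLUDE (Address);   -- the covering field rides along at the leaf level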

A phone book without addresses would be a little thinner, and we could pack more on a page. We probably don't want to include the Address 2 field, because the Address 1 field is enough to get what we need. When building covering indexes, the covering fields go at the end of the index. Obviously, an index that put the address first and the names at the end would suck: that wouldn't be as fast and easy to use.

That's why the covering fields go at the end, and the names go first - because we use those. This is more efficient than organizing the phone book by first name then last name, because there are more unique last names than first names.

There are probably more Brents in Miami than Ozars. This is called selectivity. The last name field is more selective than the first name field because it has more unique values.

For lookup tables - meaning, when users need to look up a specific record - once you've narrowed down the list of fields that you're going to use in an index, you generally put the most selective field first.

Indexes should almost never be set up with a non-selective field first, like Gender. Search on Gender alone and you get half of Miami back. Not that that's a bad thing - but no matter how much of a suave guy you think you are, you don't really need ALL of the women in Miami. This is why non-selective indexes aren't all that useful on lookup tables. This rule is really important for lookup tables, but what if you aren't trying to look up a single specific record?

What if you're interested in a range of records? Well, let's look at the Yellow Pages.

Another Index
When we need to find a dog groomer, we don't want to go shuffling through the white pages looking for anything that sounds like a dog groomer. We want a list organized by business category, because we'll work with several of the records at once.

Here, we're searching for a range of records, not just a single one. Notice that we didn't put the most selective field first in the index. The field "Business Name" is more selective than "Business Category".


But we put Business Category first because we need to work with a range of records. When you're building indexes, you not only need to know what fields are important, but you have to know how the user is fetching records.
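A sketch of that ordering with made-up names - the category goes first so all the dog groomers end up stored next to each other in the index:

CREATE NONCLUSTERED INDEX IX_YellowPages_Category_Name
    ON dbo.YellowPages (BusinessCategory, BusinessName)
    INCLUDE (PhoneNumber);

-- A range-style query: many rows, all sitting next to each other in the index.
SELECT BusinessName, PhoneNumber
FROM dbo.YellowPages
WHERE BusinessCategory = 'Dog Groomers'
ORDER BY BusinessName;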

If they need several records in a row next to each other, then it may be more helpful to arrange the records like that by carefully choosing the order of the fields in the index.

Learning More About Indexes
Indexes are really important, so we'll be covering them in more depth in the next two emails. In the meantime, here are a few great resources on getting started with indexes: Our index resources page - where we've got posts and videos about heaps, indexing for deletes, partitioning, and more.

Expert Performance Indexing by Jason Strate and Ted Krueger (book) - covers how indexes work and how to pick the right one. Also available on Kindle.

In this week's episode, we're going to spend just a paragraph or two covering the other types of indexes, starting with covering indexes.

Get it? We're covering covering indexes. Oh, I kill me.

Covering Indexes
Covering indexes aren't actually a different kind of index - it's a term that is used in combination with a query and an index. If every field a query needs is already in one nonclustered index (like the sketch below), SQL Server doesn't have to go back to the clustered index in order to get the results I need. This means faster queries, plus less contention - it leaves the clustered index and the other nonclustered indexes free for other queries to use.
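For example, reusing the hypothetical white pages index from earlier: because LastName, FirstName, and the included Address column all live in that index, this query can be answered from the index alone, with no trip back to the clustered index:

SELECT LastName, FirstName, Address
FROM dbo.WhitePages
WHERE LastName = 'Smith'
  AND FirstName = 'John';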

Covering indexes are most effective when you have very frequent queries that constantly read data, and they're causing blocking problems or heavy IO.

Filtered Indexes
Say we're a big huge online store named after a river, and we constantly add records to our dbo.Orders table as customers place orders. We need to query orders that haven't been processed yet. A filtered index - a nonclustered index with its own WHERE clause - only contains the rows that match that clause, so it stays small and fast even as the table grows; see the sketch below.
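A minimal sketch of that kind of filtered index, with hypothetical column names (OrderID, CustomerID, IsProcessed) standing in for whatever the real table uses:

CREATE NONCLUSTERED INDEX IX_Orders_Unprocessed
    ON dbo.Orders (OrderID)
    INCLUDE (CustomerID)
    WHERE IsProcessed = 0;      -- only unprocessed orders live in this index

-- Queries with a matching predicate can be satisfied by the tiny filtered index:
SELECT OrderID, CustomerID
FROM dbo.Orders
WHERE IsProcessed = 0;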

Full Text Indexes
Let's say we have a table called dbo.MoviePlots, and it has a Description field where we put each movie's plot. We know we liked this one movie where a guy was afraid of snakes, but we couldn't remember the exact title. We could write a query with a LIKE '%snakes%' predicate, but SQL Server would have to look at every movie's description, scrolling through all of the words one character at a time looking for snakes. Even if we index the Description field, we're still going to have to scan every row.

Full text indexes break up a text field like Description into individual words, and then store the list of words in a separate index.
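Roughly what that looks like, assuming a unique key index already exists on the table (the catalog, table, and column names here are made up):

-- One-time setup: a full-text catalog plus a full-text index on the Description column.
-- PK_MoviePlots is assumed to be the table's unique primary key index.
CREATE FULLTEXT CATALOG MovieCatalog;
CREATE FULLTEXT INDEX ON dbo.MoviePlots (Description)
    KEY INDEX PK_MoviePlots
    ON MovieCatalog;

-- The query gets rewritten to use a full text predicate instead of LIKE:
SELECT Title, Description
FROM dbo.MoviePlots
WHERE CONTAINS(Description, 'snakes');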

They're blazing fast if you need to look for specific words - but only as long as you rewrite your query to use the full text search commands, like the CONTAINS sketch above.

XML Indexes
If you store XML in your tables and query inside it, SQL Server has to shred the XML apart on the fly for every query. That's CPU-intensive, and a recipe for slow performance.

Instead, we can create pre-processed versions of the XML - XML indexes - so we can rapidly jump to specific nodes or values.

Heaps
Heaps are tables with no clustered index whatsoever. They're tables stored in random order, data slapped in any old place that fits. When you want to query a heap, SQL Server scans the whole freakin' thing.

Sounds bad, right? Most of the time, it is - except for a couple of very niche uses. If you have a log-only table, meaning there's inserts but never any updates, deletes, or selects, then a heap can be faster.

Just make sure you test it to make sure it's actually faster under your application's needs. SQL Server does track how your indexes get used, but unfortunately those numbers don't give you a holistic overall picture - they're just raw data that has to be manually combined and interpreted. We'll talk about that in coming lessons, but you need to have this ground knowledge of index options first.

Even so, some people love index hints. Index tuning isn't a one-time project - you just need to practice it regularly.


Over time, data sizes change, user activity changes, and the SQL Server optimizer changes. Each of these things means that the indexes that are best for an application will also change. Because of this, you want to make a few index changes every month. Here's a cautionary tale from one shop: data was updated frequently throughout the day, and index tuning was a serious challenge. At the best of times, performance was dicey.

Things went bad
Application performance plummeted.

Lots of code changes had been released recently, data was growing rapidly, and the hardware wasn't the absolute freshest. There was no single smoking gun-- there were 20 smoking guns! A team was formed of developers and IT staff to tackle the performance issue.

Early in the process they reviewed maintenance on the database server. Someone asked about index fragmentation.


The DBA explained that fragmentation wasn't the problem.

Bad, meet ugly
The whole performance team flipped out. Trust disappeared. Managers squirmed. More managers were called in. The DBA tried to change the subject, but it was just too late.

More than a week was wasted over Fragmentation-Gate. It was a huge, embarrassing distraction, and it solved nothing. Here's the deal-- the DBA was actually right. Fragmentation wasn't the root cause of the performance problem. But he made a miscalculation: if you're not following a best practice, you need a good reason why. Regular index maintenance still has a lot of merit, and it's still a good idea to automate it.

Don't go too crazy with it-- monitor the runtime and IO use and run it only at low volume times to make sure it helps more than it hurts.

Indexes are like cars: they need regular maintenance, but the maintenance itself can take them off the road for a while. Before you implement index maintenance, find out how much time tables can be offline in each of your databases. Even with SQL Server Enterprise Edition, you can only specify an online rebuild if the index contains no large object types; this restriction is relaxed somewhat in SQL Server 2012. Partitioned tables are especially tricky: you can rebuild an entire partitioned index online, but partition-level rebuilds are offline until SQL Server 2014.
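For reference, here's roughly what the underlying commands look like - a reorganize is always online, while a rebuild is only online when you ask for it on Enterprise Edition (the index and table names are placeholders):

-- Lighter-weight option: always an online operation, available in every edition.
ALTER INDEX IX_Orders_Unprocessed ON dbo.Orders REORGANIZE;

-- Heavier option: rebuilds the index from scratch.
-- ONLINE = ON needs Enterprise Edition, and (before SQL Server 2012) no LOB columns.
ALTER INDEX IX_Orders_Unprocessed ON dbo.Orders
    REBUILD WITH (ONLINE = ON);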

Maintenance plans or custom scripts?
If you need to minimize downtime, custom index maintenance scripts are the way to go. Our favorite: Ola Hallengren's maintenance scripts. These are super flexible, well documented, and... free! The scripts have all sorts of cool options like time boxing and statistics maintenance. Download and configure them on a test instance first. There are a lot of options and parameters, and you'll need to play with them.
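For example, a job step along these lines runs the IndexOptimize procedure against all user databases with a one-hour time box - the server name is a placeholder, the procedures are assumed to be installed in master, and the parameter names should be double-checked against the version you download:

sqlcmd -E -S MyServer -d master -b -Q "EXECUTE dbo.IndexOptimize @Databases = 'USER_DATABASES', @TimeLimit = 3600, @LogToTable = 'Y'"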

Get used to the 'cmdexec' job step types. When you install the scripts you'll see that the SQL Server Agent jobs run index maintenance using a call to sqlcmd. That's by design! Use the examples on the website. If you scroll to the bottom of the index maintenance page you'll find all sorts of examples showing how to get the procedure to do different useful things.

Find out when maintenance fails
Don't forget to make sure that your maintenance jobs are successfully logging their progress.

Set up Database Mail and operators so jobs let you know if they fail - the sketch below shows the moving parts.
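A minimal sketch of the notification side, using msdb's stock procedures - the operator name, email address, and job name are placeholders, and Database Mail itself still has to be configured separately:

-- Create an operator to receive failure emails.
EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @email_address = N'dba-team@example.com';

-- Tell an existing Agent job to email that operator when it fails.
EXEC msdb.dbo.sp_update_job
    @job_name = N'IndexOptimize - USER_DATABASES',
    @notify_level_email = 2,                      -- 2 = notify on failure
    @notify_email_operator_name = N'DBA Team';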

Tell your boss you did a good thing
Finally, write up a quick summary of what you did, which option you chose (custom scripts or maintenance plans), and why. Share it with your manager and explain that you've set up automated index maintenance as a proactive step. Having your manager know you're taking the time to follow best practices certainly won't hurt-- and one of these days, it just might help you out.

Stop Worrying About Fragmentation - defragging everything can cause more problems than it solves.

How many production SQL Servers do you have? Did your backups take the normal amount of time last night? When was the last time DBCC successfully finished in production?

Security questions: How many different people are sysadmins in production?
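That last question is easy to answer with a quick query against the standard catalog views - a sketch:

-- Who holds the sysadmin fixed server role?
SELECT p.name AS login_name, p.type_desc
FROM sys.server_role_members AS rm
JOIN sys.server_principals AS r
    ON rm.role_principal_id = r.principal_id
JOIN sys.server_principals AS p
    ON rm.member_principal_id = p.principal_id
WHERE r.name = N'sysadmin';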

How to Survey Your Network for Servers
Put a row in the spreadsheet for every server you have - whether you're in charge of it or not. We want to start with a good inventory of what we have, and there are two good free tools to do it. Microsoft Assessment and Planning Toolkit - it's actually designed for licensing compliance, but it works great for building server inventories.

It scans your network looking for whatever programs you pick, but just confine it to SQL Servers only. If you're in a small shop where your account has admin privileges in the domain, you might find a lot more servers than you expected. We don't get paid for plugging these products, and we're always on the lookout for similar tools, so if you know of a better one, email it to us at Help@BrentOzar.com.

Before you go scanning anything, ask whether a list of the company's SQL Servers already exists. I don't have to see the list - I understand if you have security concerns - but I just want to know if that list exists. This question serves two purposes: it tells YOU if the company has their act together when it comes to documentation, and it tells THEM that you're the right person to manage their database servers.

If they don't have the list, they're going to want that list right away. Now's your chance to explain how you would go about gathering that information armed with the info in this email.