Thursday, 15 November 2018

Interview Q and A for SQL Server Part - 3

191. Can you briefly describe an extent?
An extent is eight physically contiguous pages, or 64 KB; therefore 16 extents = 1 MB.
There are two types of Extents. Uniform Extents and Mixed Extents.
Uniform extents are owned by a single object;
Mixed extents are shared by up to eight objects

192.  What are the different Types of Pages available?
GAM and SGAM (Global Allocation Map & Shared GAM):
GAM: Tracks which extents have been allocated: 1 – the extent is free, 0 – the extent is already allocated.
SGAM: Tracks mixed extents: 1 – a mixed extent with at least one free page, 0 – not a mixed extent or no free pages.
Each GAM/SGAM page covers about 64,000 extents – roughly 4 GB of a data file.
PFS (Page Free Space): Tracks the approximate amount of free space available in each page.
DCM (Differential Changed Map):  This tracks the extents that have changed since the last BACKUP DATABASE statement. 1 – Modified, 0 – Not modified
BCM (Bulk Changed Map): This tracks the extents that have been modified by bulk logged operations since the last BACKUP LOG statement. 1 – Modified, 0 – Not modified (Used only in bulk logged Recovery model)
In each data file these allocation pages appear in a fixed order near the beginning of the file.
Along with those we have three different types of data pages:
Data
Index
Text/Image (LOB, ROW_OVERFLOW, XML)
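To see these allocation pages for yourself, the undocumented DBCC PAGE command can dump them. The sketch below is only an illustration (the database name AdventureWorks and dump option 3 are assumptions) and should be run on a test system only:
DBCC TRACEON (3604);                      -- route DBCC output to the client instead of the error log
DBCC PAGE ('AdventureWorks', 1, 1, 3);    -- file 1, page 1: the PFS page
DBCC PAGE ('AdventureWorks', 1, 2, 3);    -- file 1, page 2: the GAM page
DBCC PAGE ('AdventureWorks', 1, 3, 3);    -- file 1, page 3: the SGAM page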

193. Can you explain the network flow in SQL Server Architecture?
The network flow can be described in the following steps:
1. The user selects an option in a client application. This option calls a function in the client application that generates a query that is sent to SQL Server. The application uses a database access library to send the query in a way SQL Server can understand.
2. The database library transforms the original request into a sequence of one or more Transact-SQL statements to be sent to SQL Server. These statements are encapsulated in one or more Tabular Data Stream (TDS) packets and passed to the database network library to be transferred to the server computer.
3. The database network library uses the network library available in the client computer to repackage the TDS packets as network protocol packets.
4. The network protocol packets are sent to the server computer network library across the network, where they are unwrapped from their network protocol.
5. The extracted TDS packets are sent to Open Data Services (ODS), where the original query is extracted.
6. ODS sends the query to the relational engine, where the query is optimized and executed in collaboration with the storage engine.
7. The relational engine creates a result set with the final data and sends it to ODS.
8. ODS builds one or more TDS packets to be sent to the client application, and sends them to the server database network library.
9. The server database network library repackages the TDS packets as network protocol packets and sends them across the network to the client computer.
10. The client computer receives the network protocol packets and forwards them to the network libraries where the TDS packets are extracted.
11. The network library sends the TDS packets to the database access library, where these packets are reassembled and exposed as a client result set to the client application.
12. The client application displays information contained in the result sets to the user.

194. Can you describe how SQL Server handles Batch or Task Scheduling?
Each instance must handle potentially thousands of concurrent requests from users. Instances of SQL Server use Microsoft Windows threads, or if configured, they use fibers, to manage these concurrent tasks efficiently. This includes one or more threads for each server Net-Library, a network thread to handle network I/O, and a signal thread for communicating with the Service Control Manager.
Understanding Scheduling:  Each instance of SQL Server has an internal layer (SQL OS/Kernel) that implements an environment similar to an operating system. This internal layer is used for scheduling and synchronizing concurrent tasks without having to call the Windows kernel.
Connection: A connection is established when the user is successfully logged in. The user can then submit one or more Transact-SQL statements for execution. A connection is closed when the user explicitly logs out, or the connection is terminated.
Batch: An SQL batch is a set of one or more Transact-SQL statements sent from a client to an instance of SQL Server for execution.
Task: A task represents a unit of work that is scheduled by SQL Server. A batch can map to one or more tasks.
Windows thread: Each Windows thread represents an independent execution mechanism.
Fiber: A fiber is a lightweight thread that requires fewer resources than a Windows thread. One Windows thread can be mapped to many fibers.
Worker thread: A worker thread represents a logical thread (task) in SQL Server that is internally mapped (1:1) to either a Windows thread or, if lightweight pooling is turned on, to a fiber. Tasks are mapped to workers only while free worker threads are available (controlled by the max worker threads configuration option).
Thread and Fiber Execution: Microsoft Windows uses a numeric priority system that ranges from 1 through 31 to schedule threads for execution. Zero is reserved for operating system use. When several threads are waiting to execute, Windows dispatches the thread with the highest priority.
By default, each instance of SQL Server runs at a priority of 7, which is referred to as the normal priority. The priority boost configuration option can be used to increase the priority of the threads from an instance of SQL Server to 13, which is referred to as high priority.
The performance of any instances running at normal priority can be adversely affected. Also, the performance of other applications and components on the server can decline if priority boost is turned on.
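The schedulers and their workers can be observed through DMVs (SQL Server 2005 and later); for example:
SELECT scheduler_id,
       status,
       current_tasks_count,
       runnable_tasks_count,
       current_workers_count,
       work_queue_count
FROM   sys.dm_os_schedulers
WHERE  scheduler_id < 255;   -- a common convention to filter out hidden and DAC schedulers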

195. What are the restore options available?
When you restore a backup, you can choose from three restore options (examples of each are sketched below):
  1. With Recovery – The database is ready to use; users can connect to the database and change data.
  2. No Recovery – The database is not ready; there are more backups that have to be applied to it first. Users cannot connect to the database because it is in Restoring state. (Exception: database snapshots are not considered here.)
  3. Standby / Read Only – The database is ready to use but is in read-only mode; users can connect but cannot change data. A running (recovered) database cannot be changed to standby mode; only a database left in the NORECOVERY state can be moved to standby. This option is specified while restoring a database or transaction log.
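A minimal sketch of the three options in T-SQL; the database name and file paths are placeholders:
-- WITH RECOVERY: the database comes online immediately
RESTORE DATABASE AdventureWorks FROM DISK = 'D:\Backup\AdventureWorks_full.bak' WITH RECOVERY;

-- WITH NORECOVERY: the database stays in Restoring state so further backups can be applied
RESTORE DATABASE AdventureWorks FROM DISK = 'D:\Backup\AdventureWorks_full.bak' WITH NORECOVERY;

-- STANDBY: the database is readable; the undo file allows additional log restores later
RESTORE LOG AdventureWorks FROM DISK = 'D:\Backup\AdventureWorks_log.trn'
    WITH STANDBY = 'D:\Backup\AdventureWorks_undo.dat';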

196. Are native SQL Server 2005 backups in clear text or encrypted?
They are in clear text. Natively encrypted database backups were introduced with SQL Server 2008 (via Transparent Data Encryption); prior to SQL Server 2008 a third-party product was necessary to encrypt database backups.

197. How can I verify that backups are occurring on a daily basis?
  • Check all backup jobs history
  • Review the SQL Server error log for backup related entries.
  • Query the msdb.dbo.backupset table for backup-related entries (a sample query is sketched after this list).
  • Review the file system where the backups are issued to validate they exist.
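A minimal sketch of such a query against msdb; the one-day window is just an example:
SELECT bs.database_name,
       bs.type,                               -- D = full, I = differential, L = log
       bs.backup_start_date,
       bs.backup_finish_date,
       bmf.physical_device_name
FROM   msdb.dbo.backupset AS bs
       JOIN msdb.dbo.backupmediafamily AS bmf
            ON bs.media_set_id = bmf.media_set_id
WHERE  bs.backup_finish_date >= DATEADD(DAY, -1, GETDATE())
ORDER BY bs.backup_finish_date DESC;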

 198. How do you know if your database backups are restorable?
  • Issue the RESTORE VERIFYONLY command to validate the backup (example below). For validating LiteSpeed backups use xp_restore_verifyonly.
  • Randomly retrieve tapes from off site and work through the restore process with your team to validate the database is restored in a successful manner.
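For example (the path is a placeholder; WITH CHECKSUM is only meaningful if the backup was taken with checksums):
RESTORE VERIFYONLY FROM DISK = 'D:\Backup\AdventureWorks_full.bak' WITH CHECKSUM;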

199. What are some common reasons why database restores fail?
  • Sufficient space not available on drive
  • User may not have sufficient permissions to perform the restore
  • Unable to gain exclusive use of the database.
  • LSNs are out of sequence so the backups cannot be restored.
  • Syntax errors, such as with the WITH MOVE clause.
  • Version problems, for example trying to restore a backup from a newer version onto an older version.
  • The wrong backup location may have been specified.
  • The service account may not have permissions on the backup folder.

200. What are the permissions required to perform backup and Restore?
The user must be a member of one of the roles below.
Backup:
  • sysadmin – fixed server role
  • db_owner –  fixed database role
  • db_backupoperator – fixed database role
Restore:
  • Sysadmin – fixed server role
  • Dbcreator – fixed server role
  • db_owner – fixed database role

201.  How can you be notified if a native SQL Server database backup or restore fails via the native tools?
  • Set up SQL Server Alerts to be sent to operators on a failure condition.
  • Include RAISERROR or TRY\CATCH logic in your backup or restore code to alert on the failure (a sketch follows).
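A minimal sketch of the TRY\CATCH approach; the database name and path are placeholders, and re-raising the error makes the calling Agent job fail so its failure notification fires:
BEGIN TRY
    BACKUP DATABASE AdventureWorks TO DISK = 'D:\Backup\AdventureWorks_full.bak';
END TRY
BEGIN CATCH
    DECLARE @msg nvarchar(2048);
    SET @msg = ERROR_MESSAGE();
    -- Re-raise so the job fails and its notification is sent
    RAISERROR ('Backup of AdventureWorks failed: %s', 16, 1, @msg);
END CATCH;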

202. Can all successful SQL Server backup entries be prevented from being written to the SQL Server error log with a single trace flag?
Yes – just enable trace flag 3226.
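For example, it can be enabled for all sessions at runtime (or more permanently via the -T3226 startup parameter):
DBCC TRACEON (3226, -1);   -- -1 applies the trace flag globally until the next restart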

 203. What are some common post restore processes?
  • Sync the logins and users
  • Validate the data is accurate by running dbcc commands
  • Notify the team\user community
  • Cleanse the data to remove sensitive information, e.g. SSNs, credit card information, customer names, personal information, etc.
  • Change database properties, e.g. recovery model, read-only, etc.

204. Which database holds the backup and restore system tables? What are the backup and restore system tables, and what does each of them do?
The msdb database holds the backup and restore system tables. Here are those tables and their purpose:
backupfile – contains one row for each data file or log file backed up
backupmediafamily – contains one row for each media family
backupmediaset – contains one row for each backup media set
backupset – contains one row for each backup set
restorefile – contains one row for each restored file
restorefilegroup – contains one row for each restored filegroup
restorehistory – contains one row for each restore operation

205. How can full backups be issued without interrupting the LSN chain?
Issue the BACKUP command with the COPY_ONLY option.
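For example (the path is a placeholder); a copy-only full backup does not reset the differential base:
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backup\AdventureWorks_copyonly.bak'
WITH COPY_ONLY;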

206. How is a point in time recovery performed?
It depends on which backup types are issued.  In this example let’s assume that full, differential and transaction log backups are issued.
  • Restore the most recent full backup with the NORECOVERY clause
  • Restore the most recent differential backup with the NORECOVERY clause
  • Restore all of the subsequent transaction log backups with the NORECOVERY clause except the last transaction log backup
  • Restore the last transaction log backup with the RECOVERY clause and a STOPAT statement if the entire transaction log does not need to be applied
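A sketch of the sequence with placeholder file names and an example STOPAT time:
RESTORE DATABASE AdventureWorks FROM DISK = 'D:\Backup\AW_full.bak' WITH NORECOVERY;
RESTORE DATABASE AdventureWorks FROM DISK = 'D:\Backup\AW_diff.bak' WITH NORECOVERY;
RESTORE LOG AdventureWorks FROM DISK = 'D:\Backup\AW_log1.trn' WITH NORECOVERY;
RESTORE LOG AdventureWorks FROM DISK = 'D:\Backup\AW_log2.trn'
    WITH RECOVERY, STOPAT = '2018-11-15 10:30:00';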

207. What are your recommendations for designing a backup and recovery solution? In other words, what is your backup checklist?
  • Determine What is Needed
  • Recovery Model
  • Select Backup Types
  • Backup Schedule
  • Backup Process
  • Document
  • Backup to Disk
  • Archive to Tape
  • Backup to Different Drives
  • Secure Backup Files
  • Encrypt or Password Protect Backup Files
  • Compress Backup Files
  • How Much to Keep on Disk
  • Online Backups
  • Run RESTORE VERIFYONLY
  • Offsite Storage

208. Consider a scenario where you issue a full backup.  Then issue some transaction log backups, next a differential backup, followed by more transaction log backups, then another differential and finally some transaction log backups.  If the SQL Server crashes and if all the differential backups are bad, when is the latest point in time you can successfully restore the database?  Can you recover the database to the current point in time without using any of the differential backups?
You can recover to the current point in time, as long as you have all the transaction log backups available and they are all valid. Differential backups do not affect the transaction log backup chain.

209. What are the three basic phases of database recovery, and in what order do they occur? What are the phases of the SQL Server database restore process?
Recovery runs these phases in order:
  • Analysis
  • Redo – rolls forward committed transactions
  • Undo – rolls back any incomplete transactions
A restore from backup goes through these phases:
1. Copy Data: Copies all data, log and index pages from the backup file to the database mdf, ndf and ldf files.
2. Redo: Rolls forward all committed transactions; any uncommitted transactions it finds are handled in the final Undo phase.
3. Undo: Rolls back any uncommitted transactions and makes the database available to users.

210. What options/arguments can be specified in a BACKUP LOG statement to keep inactive log records from being truncated?
  • SQL Server 2000: NO_TRUNCATE
  • SQL Server 2005/2008: NO_TRUNCATE, COPY_ONLY

211. What are all of the backup \Restore options and their associated value?
Backup Options:
  • Full – Online operation to backup all objects and data in a single database
  • Differential – Backup all extents with data changes since the last full backup
  • Transaction log – Backup of all transactions in the database transaction log since the last transaction log backup
  • File – Backup of a single file to be included with the backup when a full backup is not possible due to the overall database size
  • File group – Backup of a single file group to be included with the backup when a full backup is not possible due to the overall database size
  • Cold backup – Offline file system backup of the databases
  • Partial backup – Backs up the primary filegroup and all read-write filegroups while excluding read-only filegroups. Useful for huge databases (data warehousing).
  • Third party tools – A variety of third party tools are available to perform the operations above in addition to enterprise management, advanced features, etc.
Restore Options:
  • Restore an entire database from a full database backup (a complete restore).
  • Restore part of a database (a partial restore).
  • Restore specific files or filegroups to a database (a file restore).
  • Restore specific pages to a database (a page restore).
  • Restore a transaction log onto a database (a transaction log restore).
  • Revert a database to a point in time.

212. How much time does it take to back up a 500 GB database using a third-party tool such as LiteSpeed versus a native backup, and how much time does it take to restore that full backup with and without the third-party tool?
There is no specific time we can quote for BACKUP and RESTORE operations.
It depends on a lot of factors such as disk I/O, network, processors, etc. Some reference numbers:
SQL Server 2005:
Database size: 1.2 TB
Time taken to back up with LiteSpeed: 3:20 hrs (80% compression)
Time taken to restore: 6 hrs
Database size: 800 GB
Time taken to back up using the native method: 11 hrs
I have never tried restoring huge databases from native backups.
SQL Server 2000:
A database of 20 GB took 14 minutes to back up and 22 minutes to restore.

213. What are the issues you faced in backup and restore process?
Common Errors in Backup:
Error 3201 – when performing a backup to a network share
Sol:
Where SQL Server disk access is concerned, everything depends on the rights of the SQL Server service startup account. If you are unable to back up to a network share, check that the service startup account has write rights to that share.
 Error: Cannot open the backup device:
Sol: 
Either the specified location is missing, or the service account under which SQL Server (or the SQL Server Agent job) is running does not have permissions on that folder.
 Common Errors in Restore:
Error 3205 – Too many backup devices specified for backup or restore;
Sol:
The most common cause for this error is because you are trying to restore a SQL Server 2005 backup on a SQL Server 2000 instance
 Error 4305 – an earlier transaction log backup is required
Sol:
There are one or more transaction log backups that need to be restored before the current transaction log backup. Using the LSN numbers we can identify which prior log backups must be restored first.

214. How to perform the tail log backup?
A tail-log backup is performed like a normal log backup, with one of two options to consider:
WITH NORECOVERY: Used when the database is online and you are planning to perform a restore after the tail-log backup. It leaves the database in Restoring state to make sure no transactions occur after the tail of the log has been backed up.
WITH CONTINUE_AFTER_ERROR: Used when the database is damaged or does not start. Remember that we can only perform a log backup on a damaged database when the log files themselves are not damaged.
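A minimal sketch of both cases (database name and paths are placeholders):
-- Database is online and about to be restored: back up the tail and leave it in Restoring state
BACKUP LOG AdventureWorks TO DISK = 'D:\Backup\AdventureWorks_tail.trn' WITH NORECOVERY;

-- Database is damaged or will not start, but the log file is intact
BACKUP LOG AdventureWorks TO DISK = 'D:\Backup\AdventureWorks_tail.trn'
    WITH NO_TRUNCATE, CONTINUE_AFTER_ERROR;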

215. What is the difference between NO_LOG and TRUNCATE_ONLY?
Both remove the inactive part of the log without making a backup copy of it and truncate the log by discarding all but the active log. This option frees space. NO_LOG and TRUNCATE_ONLY are synonyms.
After truncating the log using either NO_LOG or TRUNCATE_ONLY, the changes recorded in the log are not recoverable. For recovery purposes, immediately execute BACKUP DATABASE to take a full or differential backup. Always try to avoid truncating the log this way because it breaks the log chain; until the next full or differential backup, the database is not protected from media failure. (These options were discontinued in SQL Server 2008.)

216. Consider a situation where I have to take a backup of one database of 60 GB. My hard drive lacked sufficient space at that moment. I don’t find 64GB free on any drive. Fortunately, I have 3 different drives where I can hold 20 GB on each drive. How can you perform the backup to three different drives? How can you restore those files? Is this really possible?
Yes it is possible. We can split the backup files into different places and the same can be restored.
  • BACKUP DATABASE AdventureWorks
    TO DISK = 'D:\Backup\MultiFile\AdventureWorks1.bak',
    DISK = 'E:\Backup\MultiFile\AdventureWorks2.bak',
    DISK = 'F:\Backup\MultiFile\AdventureWorks3.bak'
  • RESTORE DATABASE [AdventureWorks]
    FROM DISK = N'D:\Backup\MultiFile\AdventureWorks1.bak',
    DISK = N'E:\Backup\MultiFile\AdventureWorks2.bak',
    DISK = N'F:\Backup\MultiFile\AdventureWorks3.bak'

217. What is piecemeal Restore?
Consider a database of 3 TB where the primary filegroup is read-write and about 500 GB, and the other filegroups are read-only with a combined size of 2.5 TB. We do not really need to back up the read-only filegroups on every cycle, so we can perform partial backups.
The piecemeal restore process allows us to restore the primary filegroup first and bring the database online; the remaining filegroups can be restored later while transactions are already running against the primary filegroup. It is mostly suitable for data warehousing databases. A simplified restore sequence is sketched below.
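A minimal sketch assuming a database named Sales with a read-only filegroup named Archive2017 (both names are placeholders); the exact log-restore steps depend on the recovery model and on whether the read-only filegroups changed since their backup:
-- Bring the read-write primary filegroup online first
RESTORE DATABASE Sales FILEGROUP = 'PRIMARY'
    FROM DISK = 'D:\Backup\Sales_partial.bak'
    WITH PARTIAL, NORECOVERY;
RESTORE LOG Sales FROM DISK = 'D:\Backup\Sales_log.trn' WITH RECOVERY;

-- The primary filegroup is now online and usable; restore the read-only filegroup later
RESTORE DATABASE Sales FILEGROUP = 'Archive2017'
    FROM DISK = 'D:\Backup\Sales_Archive2017.bak'
    WITH RECOVERY;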

218. Have you ever performed a backup using T-SQL? Can you explain the different backup options?
BACKUP [DATABASE | LOG] <database> [<file or filegroup>]
TO <backup device>
[MIRROR TO <backup device>]   -- backup written to additional locations; up to three mirrors can be specified
WITH <options>
Below are the General WITH options
  • Backup Set Options
  • COPY_ONLY – Takes a backup without affecting the normal backup sequence (no chain breaking, and the differential base is not reset)
  • COMPRESSION | NO_COMPRESSION – Backup compression (SQL Server 2008 and later)
  • DESCRIPTION
  • NAME
  • PASSWORD – Can assign a pwd, same password required to restore it
  • EXPIREDATE – Expires after the given date
  • RETAINDAYS – number of days that must elapse before this backup media set can be overwritten
  • Media Set Options
  • NOINIT | INIT – Overwrite | Append
  • NOSKIP | SKIP – Check backup set expiration before overwriting | Skip the check
  • NOFORMAT | FORMAT – Preserve the existing media header | Write a new media header (creates a new media set)
  • MEDIADESCRIPTION
  • MEDIANAME
  • MEDIAPASSWORD
  • BLOCKSIZE
  • Data Transfer Options
  • BUFFERCOUNT
  • MAXTRANSFERSIZE
  • Error Management Options
  • NO_CHECKSUM | CHECKSUM
  • STOP_ON_ERROR | CONTINUE_AFTER_ERROR – Instructs BACKUP to fail if a page checksum does not verify | Continue after error
  • Compatibility Options
  • RESTART
  • Monitoring Options
  • STATS  – Shows Percentage completed
  • Tape Options
  • REWIND | NOREWIND
  • UNLOAD | NOUNLOAD
  • Log-specific Options
  • NORECOVERY – Performs a tail-log backup and leaves the database in Restoring mode
  • STANDBY – Performs a tail-log backup and leaves the database in read-only (standby) mode
  • NO_TRUNCATE – Specifies that the log not be truncated and causes the Database Engine to attempt the backup regardless of the state of the database
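A sketch combining several of these options (the mirror clause requires FORMAT and, like compression, is edition/version dependent; the database name and paths are placeholders):
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backup\AdventureWorks_full.bak'
MIRROR TO DISK = 'E:\Backup\AdventureWorks_full.bak'
WITH FORMAT,                -- MIRROR TO needs a freshly formatted media set
     COMPRESSION,
     CHECKSUM,
     STATS = 10,
     NAME = 'AdventureWorks full backup';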

219. Have you ever performed a restore using T-SQL? Can you explain the different restore options?
RESTORE [DATABASE/LOG] <File/FileGroup>
FROM  <Backup Device>
WITH <Options>
Below are the general WITH options
  • MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name'
  • REPLACE
  • RESTART
  • RESTRICTED_USER
  • Backup Set Options
  • FILE
  • PASSWORD
  • Media Set Options
  • MEDIANAME
  • MEDIAPASSWORD
  • BLOCKSIZE
  • Data Transfer Options
  • BUFFERCOUNT
  • MAXTRANSFERSIZE
  • Error Management Options
  • CHECKSUM | NO_CHECKSUM
  • STOP_ON_ERROR | CONTINUE_AFTER_ERROR
  • Monitoring Options
  • STATS [ = percentage ]
  • Tape Options
  • REWIND | NOREWIND
  • UNLOAD | NOUNLOAD
  • Replication_WITH_option
  • KEEP_REPLICATION
  • Point_in_time_options
  • STOPAT = { 'datetime' | @datetime_var }
  • STOPATMARK = { 'lsn:lsn_number' }
  • STOPBEFOREMARK
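A sketch of a restore that relocates files with MOVE; the logical file names and paths below are assumptions and should be taken from RESTORE FILELISTONLY:
RESTORE DATABASE AdventureWorks
FROM DISK = 'D:\Backup\AdventureWorks_full.bak'
WITH MOVE 'AdventureWorks_Data' TO 'E:\Data\AdventureWorks.mdf',
     MOVE 'AdventureWorks_Log'  TO 'F:\Log\AdventureWorks_log.ldf',
     REPLACE,
     STATS = 10,
     RECOVERY;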

220. Can you restore master database? If yes how?
All server-level information, including login information, is stored in the master database. Schedule a regular backup for the master database; the process to restore it is below.
  • Start the sql server in single user mode (-m)
  • Issue the restore command with replace from SQLCMD prompt
  • RESTORE DATABASE master FROM <backup_device> WITH REPLACE
  • Restart the sql server in normal mode
  • All databases that existed at the time of the master backup will be available, since everything is tracked in the master database.
  • If any databases are missing, we can manually attach their mdf/ldf files.

221. How can we rebuild the system databases?
We usually rebuild the system databases when they are corrupted.
Rebuilding deletes the system databases and recreates them, so all existing information in them is lost.
Before rebuild:
  • Locate all recent backup of system databases
  • Make a note of the mdf and ldf file locations, the server configuration, and the build/hotfix/service pack applied
Rebuild:
  • Locate the SQL Server installation bits and run setup.exe from the command prompt, passing the argument "/ACTION=REBUILDDATABASE"
  • Review the summary.txt once the rebuild completes
Post Rebuild:
  • Restore all the system databases from existing backups
  • Move the system databases mdf/ldf files to the actual locations

222. How can we rebuild Resource system database?
In SQL Server 2008, open the SQL Server Installation Center, select "Maintenance" in the left navigation pane and click Repair. This rebuilds the resource database.

223. As a database is recovering, after which phase will the database be available/online?
SQL Server 2000: After the Undo phase.
SQL Server 2005/2008:  In all editions but Enterprise, after the Undo phase (if running FULL recovery model). With Enterprise edition, after the Redo phase. Fast recovery is possible because transactions that were uncommitted when a crash occurred reacquire whatever locks they held before the crash. While these transactions are being rolled back, their locks protect them from interference by users.
On the Enterprise Edition of SQL Server 2005/2008, users are allowed access after REDO. So the point is REDO phase is done first.

224. How do you respond to the increasing transaction log file?
Alternatives for responding to a full transaction log include:
  • Backing up the log.
  • Freeing disk space so that the log can automatically grow.
  • Adding a log file on a different disk.
  • Completing or killing a long-running transaction.
  • Moving the log file to a disk drive with sufficient space.
  • Increasing the size of a log file.
  • Shrinking the log file.

225. How to rebuild the system databases in SQL 2008?
  • First we need to confirm that the master database is corrupted. We cannot restart SQL Server without the master database. By checking the error logs we can confirm that the master database is corrupted.
  • To rebuild the master database we have to use setup.exe from the command prompt. There is not much difference between 2005 and 2008 except for a few command-line switches.
  • Find the setup.exe file (C:\……………………….\100\Setup BootStrap\Release\setup.exe)
  • Run the command below from the command prompt
c:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Release>setup.exe
/QUIET
/ACTION=REBUILDDATABASE
/INSTANCENAME=<Default / Named>
/SQLSYSADMINACCOUNTS= <Service Account>
[/SAPWD=<Mandatory when using Mixedmode>]
[/SQLCOLLATION=<Give new collation if you want to change default>]
  • When setup has completed rebuilding the system databases, it will return to the command prompt with no messages (It always first prints out the version). Examine the “Summary” log file (100\setup bootstrap\logs) to verify it was completely successful.

226. Do we need installation DVD or complete binaries to rebuild system databases?
The answer is NO. In 2008 this is really a nice enhancement. While installing SQL Server these system database files are copied to the location (<MSSQL.InstanceName>\MSSQL\Binn\Templates)
In this directory you will find master, model, and msdb database and log files that were copied from your installation source as part of setup. Hence when we rebuild the databases the setup uses these files to rebuild the databases.

227. What should we do if we cannot find these database files at the ….Templates\ location?
There are two options available.
  • Use the Repair feature of Setup (available from the Maintenance option of the SQL Server Installation Center installed on your machine)
OR
  • Manually copy the necessary file(s) yourself. On your media source find the directory of your platform (x86, x64, or ia64). Then go to the following directory:
setup\sql_engine_core_inst_msi\PFiles\SqlServr\MSSQL.X\MSSQL\Binn\Template
Once you have copied the files into the Templates directory (or run Repair), re-run setup with the syntax described above.

228. Can we rebuild resource database?
Yes! To rebuild these database files you would need to run Repair from the Installation Center.

229. Can we rebuild msdb?
Yes! We can directly restore it from a valid backup. If there is no valid backup available, restore all system databases as described above.

230. What if I have applied Updates / HotFixes after installation?
As with SQL Server 2005, if for any reason you rebuild system databases or repair the resource database, you should apply your latest update even if you restore backups of system databases.

231. How to restore Master or Msdb database from a valid backup?
  • Stop and start the SQL Server Database Engine in single-user mode (using the -m parameter)
  • Restore the Master Database from SQLCMD prompt
  • From dos prompt using SQLCMD connect to the sql server and run the restore script
  • RESTORE DATABASE master FROM DISK = 'D:\MASTER_FULL.BAK' WITH REPLACE
  • Stop and start the SQL Server Database Engine in normal mode
  • Restore MSDB Database
  • Connect to management studio and run the restore script for msdb
  • RESTORE DATABASE msdb FROM DISK = 'D:\MSDB_FULL.BAK' WITH REPLACE

232. What is the difference between Hot and Cold Backup?
Performing a backup while the database is online is called a hot backup. Stopping the SQL Server service and copying the MDF and LDF files is called a cold backup, which rarely happens in production.

233. What is the Guest user account in SQL Server?  What login is it mapped to it?  
The Guest user account is created by default in all databases and is used when explicit permissions are not granted to access an object. It is not mapped directly to any login, but can be used by any login. Depending on your security needs, it may make sense to disable (revoke CONNECT from) the Guest user account in all databases except master and tempdb.

234. What is the use of BUILTIN\Administrators Group in SQL Server?
Any Windows login in BUILTIN\Administrators group is by default a SQL Server system administrator. This single group can be used to manage administrators from a Windows and SQL Server perspective

235. We have a list of 3 SQL Server logins which are dedicated to a critical application. We have given all required rights to those logins. Now my question is we have to restrict the access only to these three logins. Means there are two conditions:
a) No other user should be able to access the database except those three logins
b) Even those three logins should be able to run their queries only through the application. If someone logs in through SSMS and tries to run a query, it should fail.
In short, the only way to run a query should be from the application using one of those three logins; there should be no other way to run queries on that database. How do you restrict this?
  • Do not give access to any other login on that database except for those 3 app logins.
  • Create a logon trigger that checks the application name at connection time, along the lines of the condition below (a fuller sketch follows):
IF APP_NAME() IN ('SQL Query Analyzer', 'Microsoft SQL Server Management Studio')
BEGIN
    RAISERROR ('Queries from ad-hoc tools are not allowed.', 16, 1);
    RETURN;
END
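A minimal logon-trigger sketch (the login names are placeholders, and note that APP_NAME() comes from the connection string, so it can be spoofed; this is a deterrent, not a hard security boundary):
CREATE TRIGGER trg_block_adhoc_tools
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() IN ('AppLogin1', 'AppLogin2', 'AppLogin3')
       AND APP_NAME() IN ('SQL Query Analyzer',
                          'Microsoft SQL Server Management Studio',
                          'Microsoft SQL Server Management Studio - Query')
    BEGIN
        ROLLBACK;   -- refuse the connection
    END
END;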

236. How to resolve the orphan user problem?
Such a user is said to be an orphaned user of the database on that server instance. A database user can become orphaned if the corresponding SQL Server login is dropped. Also, a database user can become orphaned after a database is restored or attached to a different instance of SQL Server. Orphaning can happen if the database user is mapped to a SID that is not present in the new server instance.
  • To find out the orphan users
USE <database_name>;
GO
EXEC sp_change_users_login @Action = 'Report';
GO

  • To resolve the orphan user problem
USE <database_name>;
GO
EXEC sp_change_users_login @Action = 'update_one',
     @UserNamePattern = '<database_user>',
     @LoginName = '<login_name>';
GO
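On SQL Server 2005 SP2 and later, the same fix can be done with ALTER USER, which is the recommended replacement for sp_change_users_login; the user and login names below are placeholders:
-- Remap the orphaned database user to the matching server login in one step
ALTER USER [app_user] WITH LOGIN = [app_login];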

237. What are the fixed server level roles?
  • SysAdmin – Can perform any activity
  • ServerAdmin – Can change server configuration, restart, shutdown server
  • SecurityAdmin – Can manage server level logins, also can manage db level if they have permission on db
  • Granted: ALTER ANY LOGIN
  • ProcessAdmin – Can kill a process on an instance
  • Granted: ALTER ANY CONNECTION, ALTER SERVER STATE
  • DiskAdmin – Can manage the disk files
  • Granted: ALTER RESOURCES
  • BulkAdmin – Can perform BULK INSERT
  • Granted: ADMINISTER BULK OPERATIONS
  • SetupAdmin – Can add and remove linked servers
  • Granted: ALTER ANY LINKED SERVER
  • Dbcreator – Can create, alter, drop and restore any database on the instance
  • Granted: CREATE ANY DATABASE
  • Public – Default role for newly created login
sp_helpsrvrolemember: Lists the members of the fixed server roles.

238. What are the Database roles?
  • db_accessadmin – Granted: ALTER ANY USER, CREATE SCHEMA, Granted with Grant option – Connect
  • db_backupoperator – Granted: BACKUP DATABASE, BACKUP LOG, CHECKPOINT
  • db_datareader – Granted – SELECT
  • db_datawriter – Granted – INSERT, UPDATE and DELETE
  • db_ddladmin – Granted – Any DDL operation
  • db_denydatareader – Denied – SELECT
  • db_denydatawriter – Denied – INSERT, UPDATE and DELETE
  • db_owner – Granted with GRANT option: CONTROL
  • db_securityadmin – Granted: ALTER ANY APPLICATION ROLE, ALTER ANY ROLE, CREATE SCHEMA, VIEW DEFINITION
  • dbm_monitor – Granted: VIEW most recent status in Database Mirroring Monitor
sp_helprolemember: Lists the members of the database roles.
Note: Fixed database roles are not equivalent to their database-level permission. For example, the db_owner fixed database role has the CONTROL DATABASE permission. But granting the CONTROL DATABASE permission does not make a user a member of the db_owner fixed database role.

239. What are the security-related catalog views? Where is the security-related information stored?
  • Server level:
  • sys.server_permissions
  • sys.server_principals
  • sys.server_role_members
  • sys.sql_logins
  • Database level:
  • sys.database_permissions
  • sys.database_principals
  • sys.database_role_members

240. What are the extra roles available in msdb?
  • db_ssisadmin: Equals to sysadmin
  • db_ssisoperator: Import/Delete/Change Role of own packages
  • db_ssisltduser: Only can view and execute the packages
  • dc_admin : Can administrate and use the data collector
  • dc_operator: Can administrate and use the data collector
  • dc_proxy : Can administrate and use the data collector
  • PolicyAdministratorRole: can perform all configuration and maintenance activities on Policy-Based Management policies and conditions.
  • ServerGroupAdministratorRole : Can administrate the registered server group
  • ServerGroupReaderRole: Can view the registered server groups
  • dbm_monitor: Created in the msdb database when the first database is registered in Database Mirroring Monitor

241. If you lose rights to your SQL Server instance what are the options to connect to SQL SERVER Instance?
Option1: Use the Dedicated Administrator Connection
Option2: Use BUILTIN\Administrators Group
Option3: Change Registry Values

242. What objects does the fn_my_permissions function report on?
  • SERVER
  • DATABASE
  • SCHEMA
  • OBJECT
  • USER
  • LOGIN
  • ROLE
  • APPLICATION ROLE
  • TYPE
  • MESSAGE TYPE
  • ASYMMETRIC KEY
  • SYMMETRIC KEY
  • CERTIFICATE
  • SERVICE
  • REMOTE SERVICE BINDING
  • FULLTEXT CATALOG
  • ASSEMBLY
  • CONTRACT
  • ENDPOINT
  • ROUTE
  • XML SCHEMA COLLECTION
SELECT * FROM fn_my_permissions(NULL, 'SERVER');
SELECT * FROM fn_my_permissions('AdventureWorks', 'DATABASE');
SELECT * FROM fn_my_permissions('Employee', 'OBJECT');

243. Name three of the features managed by the Surface Area Configuration tool.
  • Ad-hoc remote queries
  • Common language runtime
  • Dedicated Administrator Connection
  • Database Mail
  • Native XML Web Services
  • OLE Automation
  • Service Broker
  • SQL Mail
  • Web Assistant
  • xp_cmdshell

244. What options are available to audit login activity?
  • Custom solution with your application to log all logins into a centralized table
  • Enable login auditing at the instance level in Management Studio
  • Execute Profiler to capture logins into the instance
  • Leverage a third party product

245. How do you back up certificates in SQL Server?
  • Using Native Backup
  • Using the BACKUP CERTIFICATE command (example below)
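A minimal sketch of the second option; the certificate name, paths and password are placeholders:
BACKUP CERTIFICATE MyBackupCert
TO FILE = 'D:\Backup\MyBackupCert.cer'
WITH PRIVATE KEY (FILE = 'D:\Backup\MyBackupCert.pvk',
                  ENCRYPTION BY PASSWORD = 'StrongP@ssw0rd!');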

246. Name three of the properties that the SQL Server built-in function LOGINPROPERTY reports for standard logins.
  • Date when the password was set
  • Locked out standard login
  • Expired password
  • Must change password at next login
  • Count of consecutive failed login attempts
  • Time of the last failed login attempt
  • Amount of time since the password policy has been applied to the login
  • Date when the login was locked out
  • Password hash
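For example, several of these properties can be checked for a SQL login (the login name 'sa' is just an example):
SELECT LOGINPROPERTY('sa', 'PasswordLastSetTime') AS PasswordLastSetTime,
       LOGINPROPERTY('sa', 'IsLocked')            AS IsLocked,
       LOGINPROPERTY('sa', 'IsExpired')           AS IsExpired,
       LOGINPROPERTY('sa', 'IsMustChange')        AS IsMustChange,
       LOGINPROPERTY('sa', 'BadPasswordCount')    AS BadPasswordCount,
       LOGINPROPERTY('sa', 'BadPasswordTime')     AS BadPasswordTime;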

247. How can SQL Server instances be hidden?
To hide a SQL Server instance, we need to make a change in SQL Server Configuration Manager. Launch SQL Server Configuration Manager, select the instance of SQL Server, right-click and select Properties. Then set Hide Instance to "Yes" and click OK or Apply. After the change is made, you need to restart the instance of SQL Server so that the instance name is no longer exposed.

248. Is Profiler the only tool that has the ability to audit and identify DDL events?
No. In SQL Server 2005 DDL triggers were introduced to audit CREATE, ALTER and DROP events for relational (stored procedures, functions, views, etc.) and security (certificates, logins, server, etc.) objects.

249. What are some of the pros and cons of not dropping the SQL Server BUILTIN\Administrators Group?
Pros:
  • Any Windows login in the local Administrators group is by default a SQL Server system administrator
  • This single group can be used to manage SQL Server from a system administrators perspective
Cons:
  • Any Windows login in the local Administrators group is by default a SQL Server system administrator, which may not be a desired situation

250. What is SQL Injection and why is it a problem?
SQL Injection is an exploit where unhandled\unexpected SQL commands are passed to SQL Server in a malicious manner.  It is a problem because unknowingly data can be stolen, deleted, updated, inserted or corrupted.

251. How can SQL Injection be stopped?
Development\DBA
  • Validate the SQL commands that are being passed by the front end
  • Validate the length and data type per parameter
  • Convert dynamic SQL to stored procedures with parameters, or use parameterized queries (see the sp_executesql sketch after this list)
  • Filter or reject input containing characters and keywords commonly used in injection attacks: semicolons, EXEC, CAST, SET, two dashes, apostrophes, etc.
  • Based on your front end programming language determine what special characters should be removed before any commands are passed to SQL Server
Network Administration
  • Prevent traffic from particular IP addresses or domains
  • Review the firewall settings to determine whether SQL Injection attacks can be prevented
  • Remove old web pages and directories that are no longer in use because these can be crawled and exploited
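A minimal sketch of a parameterized query with sp_executesql; the table and column names are hypothetical, and the point is that the user-supplied value is passed as data rather than concatenated into the SQL text:
DECLARE @CustomerName nvarchar(100);
SET @CustomerName = N'O''Brien';   -- value coming from the front end

EXEC sp_executesql
     N'SELECT CustomerID, CustomerName FROM dbo.Customers WHERE CustomerName = @Name',
     N'@Name nvarchar(100)',
     @Name = @CustomerName;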

252. How to recover from SQL Injection?
If for some reason the resolution implemented does not resolve the problem and the SQL Injection attack occurs again, the quickest path may be to do the following:
  • Shut down the web sites
  • Review the IIS logs to determine the commands issued and which web page\command has the vulnerability
  • Review the code to determine which tables were affected and which commands were issued
  • Find and replace the string in your tables
  • Correct the web page\command that has the vulnerability
  • Test to validate the issue no longer occurs
  • Deploy the web page\command
  • Re-enable the web sites

253. How to enforce Security in SQL SERVER?
By providing strong passwords, limiting access so that the right people have access to the right data, creating customized database and server roles with appropriate privileges, and choosing the correct authentication mode, etc.
A DBA should be careful in providing security. General precautions include:
  • Minimize the number of sysadmins allowed to access SQL Server.
  • Give users the least amount of permissions they need to perform their job.
  • Use stored procedures or views to allow users to access data instead of letting them directly access tables.
  • When possible, use Windows Authentication logins instead of SQL Server logins.
  • Don’t grant permissions to the public database role.
  • Remove user login IDs who no longer need access to SQL Server.
  • Avoid creating network shares on any SQL Server.
  • Turn on login auditing so you can see who has succeeded, and failed, to login.
  • Ensure that your SQL Servers are behind a firewall and are not exposed directly to the Internet.
  • Using server, database and application roles to control access to the data
  • Securing the physical database files using NTFS permissions
  • Using an unguessable SA password
  • Restricting physical access to the SQL Server
  • Disabling the Guest account
  • Isolating SQL Server from the web server
  • Run the SQL Server services under a low-privileged account (a local or domain user that is not an administrator)
  • Restrict the remote administration (TC)
  • If SQL Server authentication is used, the credentials are secured over the network by using IPSec or SSL, or by installing a database server certificate.
  • Do not use DBO users as application logins
  • Firewall restrictions ensure that only the SQL Server listening port is available on the database server.
  • Remove the SQL guest user account.
  • Remove the BUILTIN\Administrators server login.
  • Apply the latest security updates / patches
We have plenty of features in SQL SERVER to enforce the security. The major features include:
  • Password policies
  • Encryption
  • Limited metadata visibility (system Tables to Catalog Views)
  • DDL triggers
  • User-schema separation
  • Impersonation
  • Granular permission sets
  • Security catalog views
In addition to these features we have some more added in SQL SERVER 2008, like Policy Based Management, Security Audit, Improved Encryption, Backup Security etc.
When we talk about security we have to consider the areas below:
  • Patches and Updates
  • Services
  • Protocols
  • Accounts
  • Files and Directories
  • Shares
  • Ports
  • Registry
  • Auditing and Logging
  • SQL Server Security
  • SQL Server Logins, Users, and Roles
  • SQL Server Database Objects

254. You are delegating permissions on your SQL Server 2005 server to other administrators. You have local, single-server jobs on one server that you would like to allow another administrator to start, stop, and view the history for, but not delete history. This administrator will own the jobs. Which role should you assign?
SQLAgentUserRole
SQL Server 2005 provides 3 fixed roles for the agent service that limit privileges for administrators. The SQLAgentUserRole is designed for local jobs (not multiserver) that allow the member to work with their owned jobs (edit, start, stop, view history) without deleting the history of any job.

255. What are the concurrent problems occur in accessing database?
Tr – Transaction, R – Resource
Uncommitted dependency / dirty reads:
Tr1 updates data R1
Tr2 reads data R1
Tr1 rolls back the update on R1
Now Tr2 holds inconsistent (wrong) data.
Inconsistent analysis / non-repeatable reads:
Tr1 reads data R1
Tr2 updates data R1
Tr1 reads the same data R1 again
The first and second reads by Tr1 do not match.
Phantom reads (via insert/delete):
Tr1 reads data R1 (result contains 10 records)
Tr2 inserts/deletes data in R1 (inserts 6 new records, deletes 1)
Tr1 reads the same data R1 again
The second read by Tr1 finds 6 new records and no longer finds one record that was returned the first time.

256. What isolation levels will provide completely read-consistent views of a database to all transactions?
SQL Server 2000: Only the SERIALIZABLE isolation level will provide a completely read-consistent view of a database to a given transaction. In any of the other isolation levels, you could perceive some/all of the following, depending on the isolation level running in:
    • Uncommitted dependency/dirty reads
    • Inconsistent Analysis/non-repeatable reads
    • Phantom reads (via insert/delete)
SQL Server 2005 and above: Both the SERIALIZABLE and SNAPSHOT isolation levels will provide a completely read-consistent view of a database to a given transaction. In any of the other isolation levels, you could perceive some/all of the following, depending on the isolation level running in:
    • Uncommitted dependency/dirty reads
    • Inconsistent Analysis/non-repeatable reads
    • Phantom reads (via insert/delete)

257. Within the READ_COMMITTED isolation level, during a read operation how long are locks held/retained for?
When SQL Server executes a statement at the read committed isolation level, it acquires short lived share locks on a row by row basis. The duration of these share locks is just long enough to read and process each row; the server generally releases each lock before proceeding to the next row. Thus, if you run a simple select statement under read committed and check for locks, you will typically see at most a single row lock at a time. The sole purpose of these locks is to ensure that the statement only reads and returns committed data. The locks work because updates always acquire an exclusive lock which blocks any readers trying to acquire a share lock.
258. Within the REPEATABLE_READ and SERIALIZABLE isolation levels, during a read operation and assuming row-level locking, how long are locks held/retained for?
Within either of these isolation levels, locks are held for the duration of the transaction, unlike within the READ_COMMITTED isolation level as noted above.

259. Can locks ever be de-escalated?
No, locks are only escalated, never de-escalated.

260. What are the different types of lock modes in SQL Server?
  • Shared
  • Update
  • Exclusive
  • Schema (modification and stability)
  • Bulk Update
  • Intent (shared, update, exclusive)
  • Key Range (shared, insert, exclusive)

261. Can you explain scenarios where each type of lock would be taken?
SHARED: Used for read operations that do not change or update data, such as a SELECT statement.
UPDATE: Update locks (U) are acquired just prior to modifying the data. If a transaction modifies a row, then the update lock is escalated to an exclusive lock; otherwise, it is converted to a shared lock. Only one transaction can acquire update locks to a resource at one time. Using update locks prevents multiple connections from having a shared lock that want to eventually modify a resource using an exclusive lock. Shared locks are compatible with other shared locks, but are not compatible with Update locks.
EXCLUSIVE: Used for data-modification operations, such as INSERT, UPDATE, or DELETE. Ensures that multiple updates cannot be made to the same resource at the same time.
INTENT: Used to establish a lock hierarchy. The types of intent locks are: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX). (Another question in the Difficult level section expands on this)
SCHEMA: Used when an operation dependent on the schema of a table is executing. The types of schema locks are: schema modification (Sch-M) and schema stability (Sch-S).
BULK UPDATE: Used when bulk copying data into a table and the TABLOCK hint is specified.
KEY RANGE: Protects the range of rows read by a query when using the serializable transaction isolation level. Ensures that other transactions cannot insert rows that would qualify for the queries of the serializable transaction if the queries were run again.

262. What is lock escalation and what triggers it?
The process of converting many fine-grained locks into fewer coarse-grained locks is known as Lock Escalation.
** Escalation reduces system resource consumption/overhead while increasing the possibility of concurrency conflicts
**To escalate locks, the Database Engine attempts to change the intent lock on the table to the corresponding full lock, for example, changing an intent exclusive (IX) lock to an exclusive (X) lock, or an intent shared (IS) lock to a shared (S) lock. If the lock escalation attempt succeeds and the full table lock is acquired, then all heap or B-tree, page (PAGE), key-range (KEY), or row-level (RID) locks held by the transaction on the heap or index are released. If the full lock cannot be acquired, no lock escalation happens at that time and the Database Engine will continue to acquire row, key, or page locks.
**Lock escalation is triggered at either of these times:
  • When a single Transact-SQL statement acquires at least 5,000 locks on a single table or index.
  • When the number of locks in an instance of the Database Engine exceeds memory or configuration thresholds.
  • If locks cannot be escalated because of lock conflicts, the Database Engine periodically triggers lock escalation at every 1,250 new locks acquired.
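Escalation behaviour can be observed and, from SQL Server 2008 onward, controlled per table; the table name below is a placeholder:
-- SQL Server 2008+: control escalation for one table (TABLE is the default)
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);

-- Inspect the locks currently held in this database, grouped by resource type and mode
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM   sys.dm_tran_locks
WHERE  resource_database_id = DB_ID()
GROUP BY resource_type, request_mode;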

263. Name as many of the lockable resources as possible in SQL Server?
  • RID (single row on a heap)
  • KEY (single row (or range) on an index)
  • PAGE
  • EXTENT
  • HOBT (heap or b-tree)
  • TABLE (entire table, all data and indexes)
  • FILE
  • APPLICATION
  • METADATA
  • ALLOCATION_UNIT
  • DATABASE

264. What requirements must be met for a BULK UPDATE lock to be granted, and what benefit do they serve?
The Database Engine uses bulk update (BU) locks when bulk copying data into a table, and either the TABLOCK hint is specified or the table lock on bulk load table option is set using sp_tableoption. Bulk update (BU) locks allow multiple threads to bulk load data concurrently into the same table while preventing other processes that are not bulk loading data from accessing the table.

265. What is the least restrictive type of lock? What is the most restrictive?
The least restrictive type of lock is a shared lock. The most restrictive type of lock is a schema-modification (Sch-M) lock.

266. What is a deadlock and how is it different from a standard block situation?
A deadlock occurs when two or more tasks permanently block each other, each task holding a lock on a resource that the other task is trying to lock. In a deadlock situation, both transactions will wait forever unless the deadlock is broken by an external process; in a standard blocking scenario, the blocked task simply waits until the blocking task releases the conflicting lock.

267. Which 2 isolation levels support optimistic/row-versioned-based concurrency control?
First is the READ COMMITTED isolation level.  This is the only level that supports both a pessimistic (locking-based) and optimistic (version-based) concurrency control model. Second is SNAPSHOT isolation level that supports only an optimistic concurrency control model.

268. What database options must be set to allow the use of optimistic models?
READ_COMMITTED_SNAPSHOT option for the read committed optimistic model. ALLOW_SNAPSHOT_ISOLATION option for the snapshot isolation level

269. What is the size of a lock structure?
96 bytes

270. In what circumstances will you see key-range locks, and what are they meant to protect against?
You will only see key-range locks when operating in the SERIALIZABLE isolation level.
  • Key-range locks protect a range of rows implicitly included in a record set being read by a Transact-SQL statement. The serializable isolation level requires that any query executed during a transaction must obtain the same set of rows every time it is executed during the transaction. A key range lock protects this requirement by preventing other transactions from inserting new rows whose keys would fall in the range of keys read by the serializable transaction.
  • Key-range locking prevents phantom reads. By protecting the ranges of keys between rows, it also prevents phantom insertions into a set of records accessed by a transaction.

271. Explain the purpose of INTENT locks?
The Database Engine uses intent locks to protect placing a shared (S) lock or exclusive (X) lock on a resource lower in the lock hierarchy. Intent locks are named intent locks because they are acquired before a lock at the lower level, and therefore signal intent to place locks at a lower level. Intent locks serve two purposes:
  • To prevent other transactions from modifying the higher-level resource in a way that would invalidate the lock at the lower level.
  • To improve the efficiency of the Database Engine in detecting lock conflicts at the higher level of granularity.

272. Can deadlocks occur on resources other than database object?
YES.
273. What are the different types of resources that can deadlock?
Deadlock is a condition that can occur on any system with multiple threads, not just on a relational database management system, and can occur for resources other than locks on database objects. Here are the resources:
Locks – Waiting to acquire locks on resources, such as objects, pages, rows, metadata, and applications can cause deadlock.
Worker threads – A queued task waiting for an available worker thread can cause deadlock. If the queued task owns resources that are blocking all worker threads, a deadlock will result.
Memory – When concurrent requests are waiting for memory grants that cannot be satisfied with the available memory, a deadlock can occur.
Parallel query execution-related resources – Coordinator, producer, or consumer threads associated with an exchange port may block each other, causing a deadlock, usually when at least one other process that is not part of the parallel query is involved. Also, when a parallel query starts execution, SQL Server determines the degree of parallelism, or the number of worker threads, based upon the current workload. If the system workload unexpectedly changes, for example, where new queries start running on the server or the system runs out of worker threads, then a deadlock could occur.
Multiple Active Result Sets (MARS) resources – Resources used to control the interleaving of multiple active requests under MARS, including:
  • User resource – when a thread is waiting for a resource that is potentially controlled by a user application, the resource is considered to be an external or user resource and is treated like a lock
  • Session mutex – The tasks running in one session are interleaved, meaning that only one task can run under the session at a given time. Before the task can run, it must have exclusive access to the session mutex.
  • Transaction mutex – All tasks running in one transaction are interleaved, meaning that only one task can run under the transaction at a given time. Before the task can run, it must have exclusive access to the transaction mutex.

274. Explain how the database engine manages the memory footprint for the lock pool when running in a dynamic lock management mode.
SQL Server 2000: When the server is started with locks set to 0, the lock manager allocates two percent of the memory allocated to SQL Server to an initial pool of lock structures. As the pool of locks is exhausted, additional locks are allocated. The dynamic lock pool does not allocate more than 40 percent of the memory allocated to SQL Server.
  • Generally, if more memory is required for locks than is available in current memory, and more server memory is available (the max server memory threshold has not been reached), SQL Server allocates memory dynamically to satisfy the request for locks. However, if allocating that memory would cause paging at the operating system level (for example, if another application was running on the same computer as an instance of SQL Server and using that memory), more lock space is not allocated.
SQL Server 2005: When running in dynamic management mode (i.e. if the server is started with locks configuration option set to 0), the lock manager acquires sufficient memory from the Database Engine for an initial pool of 2,500 lock structures. As the lock pool is exhausted, additional memory is acquired for the pool.
  • Generally, if more memory is required for the lock pool than is available in the Database Engine memory pool, and more computer memory is available (the max server memory threshold has not been reached), the Database Engine allocates memory dynamically to satisfy the request for locks. However, if allocating that memory would cause paging at the operating system level (for example, if another application is running on the same computer as an instance of SQL Server and using that memory), more lock space is not allocated. The dynamic lock pool does not acquire more than 60 percent of the memory allocated to the Database Engine. After the lock pool has reached 60 percent of the memory acquired by an instance of the Database Engine, or no more memory is available on the computer, further requests for locks generate an error.

275. What are the different isolation levels available?
  • Read Uncommitted Isolation Level
  • Read Committed Isolation Level
  • Repeatable Read Isolation Level
  • Serializable Isolation Level
  • Snapshot Isolation Level
  • Read Committed Snapshot Isolation Level

276. Demonstrate Isolation levels?
  • Read Uncommitted: This is the lowest isolation level. It only isolates transactions and activities to ensure that physically corrupt data is never read. It allows dirty reads, nonrepeatable reads, and phantom reads.
  • Read Committed: This isolation level does not permit dirty reads, but does allow nonrepeatable reads and phantom reads. This is the default isolation level for SQL Server, and is used for each connection to SQL Server unless one of the other isolation levels has manually been set for a connection.
  • Repeatable Read: This isolation level does not permit dirty reads or nonrepeatable reads, but does allow phantom reads.
  • Serializable Read: This is the highest isolation level and ensures that all transactions and statements are completely isolated from each other. It does not allow dirty reads, nonrepeatable reads, or phantom reads.

New isolation levels that introduced in SQL 2005 based on row versioning:
  • READ_COMMITTED_SNAPSHOT (statement level): READ_COMMITTED_SNAPSHOT is actually a variation of the default READ_COMMITTED isolation level. It uses row versioning, instead of locking, to provide read consistency at the SQL Server statement level. When a statement runs that specifies the READ_COMMITTED isolation level (the default isolation level), and the READ_COMMITTED_SNAPSHOT option is turned on at the database level, all statements see a snapshot of the data as it existed at the start of any current transaction. It uses the row-versioned snapshot of the row to return data, and no locking is needed by the statement, which is normally the case. The biggest benefit of this isolation level is that reads do not block writes and writes do not block reads. Writes can still block writes, but this is necessary to prevent data corruption.
  • ALLOW_SNAPSHOT_ISOLATION (transaction level): ALLOW_SNAPSHOT_ISOLATION is similar to READ_COMMITTED_SNAPSHOT, but it is based at the transaction level, not the statement level. When the ALLOW_SNAPSHOT_ISOLATION is turned on at the database level and the TRANSACTION ISOLATION LEVEL SNAPSHOT isolation level is turned on for the transaction (using the SET command), all statements see a snapshot of the data as it existed at the start of the transaction.
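A minimal sketch of enabling and using these options (AdventureWorks is a placeholder; turning READ_COMMITTED_SNAPSHOT on requires that no other connections are using the database):
-- Enable the row-versioning based options at the database level
ALTER DATABASE AdventureWorks SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Use snapshot isolation for a particular transaction
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM Sales.SalesOrderHeader;  -- sees data as of the start of the transaction
COMMIT;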

277. Any idea about row versioning?
The concept of row versioning is not new to SQL Server, as SQL Server has been using it for years with triggers. For example, when a DELETE trigger is executed for a row, a copy of that row is stored in the “deleted table” just in case the trigger is rolled back and the deleted row needs to be “undeleted.” In a sense, the row is versioned, and if need be, can be reused.
Row versioning for isolation levels is very similar, though not identical to row versioning for triggers. When a row versioning-based isolation level (which includes the two new ones we are now discussing) is enabled at the database level, the database engine maintains versions of each row that is modified (for an entire database). Whenever a transaction modifies any row, an image of the row before the modification is copied into a page of what is called the version store. The version store is located in the tempdb database and is used for temporary storage of versioned rows for all of the databases on a single SQL Server instance.
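The version store can be observed through dynamic management views; a minimal sketch:

-- One row per record version currently held in the tempdb version store
SELECT COUNT(*) AS version_store_rows
FROM sys.dm_tran_version_store;

-- Transactions currently using row versioning
SELECT transaction_id, transaction_sequence_num, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions;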

278. What are the properties of a transaction?
There are 4 properties called ACID.
Atomicity: All changes to data are performed as if they are a single operation. That is, all the changes are performed, or none of them are.
Example: In an application that transfers funds from one account to another, the atomicity property ensures that, if a debit is made successfully from one account, the corresponding credit is made to the other account.
Consistency: Data is in a consistent state when a transaction starts and when it ends.
Example: In an application that transfers funds from one account to another, the consistency property ensures that the total value of funds in both the accounts is the same at the start and end of each transaction.
Isolation: The intermediate state of a transaction is invisible to other transactions. As a result, transactions that run concurrently appear to be serialized.
Example: in an application that transfers funds from one account to another, the isolation property ensures that another transaction sees the transferred funds in one account or the other, but not in both, nor in neither.
Durability: After a transaction successfully completes, changes to data persist and are not undone, even in the event of a system failure.
Example: in an application that transfers funds from one account to another, the durability property ensures that the changes made to each account will not be reversed.
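A minimal sketch of atomicity in T-SQL, assuming a hypothetical dbo.Accounts table; either both updates commit or neither does:

BEGIN TRY
    BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;  -- debit
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;  -- credit
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- undo both changes if either statement fails
    THROW;                     -- THROW needs SQL Server 2012+; use RAISERROR on older versions
END CATCH;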

279. How to find out and prevent Deadlocks?
To find Deadlocks
Error:
Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
There are two popular ways to identify deadlocks:
  • Enabling a Trace Flag
  • By default, deadlocks are not written to the SQL Server error log; to capture them there, we have to enable a trace flag.
  • Trace Flag 1204 – SQL Server 2000 or below
  • Trace Flag 1222 – SQL Server 2005 or above
  • Syntax: DBCC TRACEON (1222, -1)
Note: -1 indicates trace should run for all sessions
  • Using Profiler:
  • We need to capture the lock events Lock:Deadlock and Lock:Deadlock Chain (and, on SQL Server 2005 and later, the Deadlock Graph event) along with the ObjectID data column. An Extended Events alternative is sketched below.
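On SQL Server 2008 and later, the built-in system_health Extended Events session also records deadlock graphs; a rough sketch of reading them from its ring buffer:

SELECT xed.event_data.query('(data/value/deadlock)[1]') AS deadlock_graph
FROM (
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS sh
CROSS APPLY sh.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS xed(event_data);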
To prevent Deadlocks:
  • Have the application access server objects in the same order each time it updates them.
  • During transactions, don’t allow any user input. Collect it before the transaction begins.
  • Keep transactions as short as possible. To accomplish this, when your application needs to read the same data more than once, cache it in a variable or an array and re-read it from there rather than from SQL Server.
  • Reduce lock time. Develop your application to acquire locks at the latest possible time and release them at the earliest possible time.
  • If appropriate, reduce lock escalation by using the ROWLOCK or PAGLOCK hints.
  • Consider using the NOLOCK hint to prevent locking on reads where the data is not modified often and dirty reads are acceptable.
  • If appropriate, use a lower isolation level where the application allows it.
  • Look for other opportunities to improve the efficiency of the queries
  • If both deadlock participants are using the same index, consider adding an index that can provide an alternate access path to one of the spids.

280. Say I have an environment with a full backup on Sunday night, a differential backup every night, and a transaction log backup every 45 minutes. A disaster happened at 2:30 PM on Saturday, and you suddenly found that last Sunday’s full backup is corrupted. What’s your recovery plan?
If the most recent full backup is corrupted or otherwise unrestorable, every differential based on it is useless. You then need to go back a further week to the previous full backup (taken 13 days ago) and restore that, plus the last differential based on it (from 8 days ago), and then the subsequent 8 days of transaction log backups (assuming none of those ended up corrupted!).
If you’re taking daily full backups, a corrupted full backup only introduces an additional 24 hours of logs to restore.
Alternatively, a log shipped copy of the database could save your bacon (you have a warm standby, and you know the log backups are definitely good).
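The restore sequence would look roughly like this (database name, file names, and paths are illustrative):

-- Last known good full backup (from 13 days ago)
RESTORE DATABASE MyDB FROM DISK = N'D:\Backup\MyDB_full_prev_sunday.bak' WITH NORECOVERY, REPLACE;

-- Most recent differential based on that full backup (from 8 days ago)
RESTORE DATABASE MyDB FROM DISK = N'D:\Backup\MyDB_diff_last_good.bak' WITH NORECOVERY;

-- Every transaction log backup taken since that differential, in order
RESTORE LOG MyDB FROM DISK = N'D:\Backup\MyDB_log_0001.trn' WITH NORECOVERY;
-- ...repeat for each subsequent log backup up to the point of failure...

-- Finally bring the database online
RESTORE DATABASE MyDB WITH RECOVERY;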
281. The full backup size is 300 GB, and my diff backup size usually varies between 300 MB and 5 GB. One day, unfortunately, the diff backup size jumped to 250 GB. What might be the reason? Any idea?
Are you the kind of DBA who rebuilds all indexes nightly? If so, your differential backups can easily grow to nearly the size of your full backup. That means you’re taking up nearly twice the space just to store the backups and, even worse, roughly twice the time to restore the database.
To avoid these issues with diff backups, ideally schedule the index maintenance to happen right before the full backup.
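The backup history in msdb can confirm which differential ballooned; a rough sketch, using a hypothetical database name:

SELECT database_name,
       backup_start_date,
       backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE type = 'I'                 -- D = full, I = differential, L = log
  AND database_name = N'MyDB'
ORDER BY backup_start_date DESC;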

282. What is .TUF file? What is the significance of the same? Any implications if the file is deleted?
The .TUF file is the Transaction Undo File, which is created when log shipping is configured with the secondary database in Standby mode.
When the database is in Standby mode, recovery is run each time a log backup is restored, and a file with the .TUF extension is created on the destination server; this is the transaction undo file.
This file contains information on all the transactions that were still uncommitted at the time the log backup was taken.
The file plays an important role in Standby mode, and the reason is fairly obvious: while restoring a log backup, all uncommitted transactions are recorded in the undo file and only committed transactions are written to disk, which is what enables users to read the database. When the next transaction log backup is restored, SQL Server fetches the uncommitted transactions from the undo file and checks the new log backup to see whether they have since been committed.
If they were committed, those transactions are written to disk; otherwise they remain in the undo file until they are committed or rolled back.
If the .TUF file gets deleted, there is no way to repair log shipping except by reconfiguring it from scratch.
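For reference, this is roughly how a standby-mode restore specifies the undo file (paths and names are illustrative):

RESTORE LOG MyDB
    FROM DISK = N'D:\LogShip\MyDB_20180101_0200.trn'
    WITH STANDBY = N'D:\LogShip\MyDB.tuf';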

283. I wanted to know what are the maximum worker threads setting and active worker thread count on sql server. Can you tell me how to capture this info? What’s the default value for max thread count?
We can check the current setting and thread allocation using the queries below.
-- Thread setting
SELECT max_workers_count FROM sys.dm_os_sys_info;
-- Active threads
SELECT COUNT(*) FROM sys.dm_os_threads;

The default value for max worker threads is 0, which means SQL Server automatically configures the number of worker threads at startup based on the number of CPUs (255 was the fixed default only in SQL Server 2000 and earlier).
Increasing the number of worker threads may actually decrease performance, because too many threads cause context switching, which can consume so many resources that the OS starts to degrade in overall performance.

284. What is the difference between Push and Pull Subscription?
  • Push - As the name implies, a push subscription pushes data from publisher to the subscriber. Changes can be pushed to subscribers on demand, continuously, or on a scheduled basis. 
  • Pull - As the name implies, a pull subscription requests changes from the Publisher. This allows the subscriber to pull data as needed, which is useful for disconnected machines, such as notebook computers, that are not always connected and can pull the data whenever they do connect.

285.  What are different replication agents and what’s their purpose?
  • Snapshot Agent- The Snapshot Agent is used with all types of replication. It prepares the schema and the initial bulk copy files of published tables and other objects, stores the snapshot files, and records information about synchronization in the distribution database. The Snapshot Agent runs at the Distributor. 
  • Log Reader Agent - The Log Reader Agent is used with transactional replication. It moves transactions marked for replication from the transaction log on the Publisher to the distribution database. Each database published using transactional replication has its own Log Reader Agent that runs on the Distributor and connects to the Publisher (the Distributor can be on the same computer as the Publisher).
  • Distribution Agent - The Distribution Agent is used with snapshot replication and transactional replication. It applies the initial snapshot to the Subscriber and moves transactions held in the distribution database to Subscribers. The Distribution Agent runs at either the Distributor for push subscriptions or at the Subscriber for pull subscriptions. 
  • Merge Agent - The Merge Agent is used with merge replication. It applies the initial snapshot to the Subscriber and moves and reconciles incremental data changes that occur. Each merge subscription has its own Merge Agent that connects to both the Publisher and the Subscriber and updates both. The Merge Agent runs at either the Distributor for push subscriptions or the Subscriber for pull subscriptions. 
  • Queue Reader Agent - The Queue Reader Agent is used with transactional replication with the queued updating option. The agent runs at the Distributor and moves changes made at the Subscriber back to the Publisher. Unlike the Distribution Agent and the Merge Agent, only one instance of the Queue Reader Agent exists to service all Publishers and publications for a given distribution database.

286. Does a specific recovery model need to be used for a replicated database?
Replication is not dependent on any particular recovery model. A database can participate in replication whether it uses the simple, bulk-logged, or full recovery model. However, how data is tracked for replication depends on the type of replication used.

287. What type of locking occurs during the Snapshot generation?
 Locking depends on the type of replication used:
    • In snapshot replication, the snapshot agent locks the object during the entire snapshot generation process.
    • In transactional replication, locks are acquired initially for a very brief time and then released. Normal operations on a database can continue after that.
    • In merge replication, no locks are acquired during the snapshot generation process.

288. What options are there to delete rows on the publisher and not on the subscriber?
  • One option is to replicate stored procedure execution instead of the actual DELETE command. You can create two different versions of the stored procedure: one on the publisher that performs the delete and another on the subscriber that does not.
  • Another option is to not replicate DELETE commands at all, as sketched below.
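For the second option, the article can be configured so that DELETE statements are simply not propagated; a rough sketch with hypothetical publication and article names:

EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'Orders',
    @source_object = N'Orders',
    @del_cmd       = N'NONE';   -- DELETE commands are not replicated for this article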

289.  Is it possible to run multiple publications and different type of publications from the same distribution database?
Yes, this can be done, and there are no restrictions on the number or types of publications that can use the same distribution database. One thing to note, though, is that all publications from a Publisher must use the same Distributor and distribution database.

290. Data is not being delivered to Subscribers, what can be the possible reasons?
There are a number of possible causes for data not being delivered to Subscribers:
    • The table is filtered, and there are no changes to deliver to a given Subscriber.
    • One or more agents are not running or are failing with an error.
    • Data is deleted by a trigger, or a trigger includes a ROLLBACK statement.
    • A transactional subscription was initialized without a snapshot, and changes have occurred on the Publisher since the publication was created.
    • Replication of stored procedure execution for a transactional publication produces different results at the Subscriber.
    • The INSERT stored procedure used by a transactional article includes a condition that is not met.
    • Data is deleted by a user, a replication script, or another application.

291. Explain what stored procedure sp_replcounters is used for?

sp_replcounters is a system stored procedure that returns information about the transaction rate, latency, and first and last log sequence number (LSN) for each published database on a server. It is run on the Publisher. Running this stored procedure on a server that is acting only as a Distributor, or that only subscribes to publications from another server, will not return any data.
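Usage is straightforward; run it on the Publisher:

-- Returns one row per published database with throughput and latency figures
EXEC sp_replcounters;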
