Our application is heavily dependent on it, so this is a blocker right now. (2,669 votes)
Free text search is something that we are working on, although not in the near term horizon.
Not having a simple backup/restore for SQL Database is a key limiter. Import/export and creating a database as a copy of another aren't sufficient for many application scenarios. (2,153 votes)
We are working on user controlled restore that exposes the system backups that SQL database uses today. I don’t have anything to announce today, but please stay tuned.
A script I maintain/export from an on-premises site should just run on the cloud.
It doesn't matter if it implements some tokens as no-ops or with warnings or notes, but it should compile unaltered and run on Azure. (921 votes)
In a database-as-a-service model, which Azure SQL Database is, full T-SQL compatibility isn’t something that we can promise or deliver on in the short/medium term. We do have two great solutions for SQL in Windows Azure: SQL Server in a Windows Azure VM is the best choice for the highest T-SQL compatibility, while Azure SQL Database is the best choice when writing new apps, or when existing apps can be modified to run with the subset of T-SQL that Azure SQL Database supports.
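As an illustrative sketch (not an exhaustive list), here are two kinds of statements that compile on boxed SQL Server but are rejected by Azure SQL Database; the object names are hypothetical:

```sql
-- Hypothetical examples of T-SQL that runs on on-premises SQL Server
-- but is not supported by Azure SQL Database:

USE SalesDb;                        -- switching database context is not supported;
                                    -- connect directly to the target database instead

SELECT *
FROM LinkedSrv.SalesDb.dbo.Orders;  -- four-part (linked server) names are not supported
```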
Enabling TDE would go a long way to help placate business sponsors concerned about their data being housed in Microsoft's datacenters. Even though the cloud is arguably more secure than doing it yourself, turning on TDE would provide a warm fuzzy feeling to those not comfortable with the concept. (835 votes)
We have several improvements around encryption, including data at rest and during use. Today I don’t have a timeframe that I can share.
The ability to query (read-only) data from a different database or linked server. This would allow referencing (joining) some tables from the source database and building views that use two or more SQL Azure databases. (804 votes)
We are looking to better understand what customers are trying to do here. Are folks looking for cross-DB query, cross-DB movement of data, or fan-out query? I’d love to hear the scenarios that you’re looking at. Please email me at email@example.com if you’d like to help us define how we move forward in this space.
We understand the need for CLR stored procedures especially from a SQL Server compatibility perspective. This is an area of active work, although I don’t have a time-frame that we can share today.
In a DB-as-a-service model such as Azure SQL database, DMVs are the mechanism today to monitor what’s going on under the covers. We are adding DMVs as we expose more of the multi-tenant system functions. http://msdn.microsoft.com/en-us/library/windowsazure/ff394114.aspx has more details on DMVs available today.
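For example, two of the DMVs available today can answer common monitoring questions. A minimal sketch using documented views:

```sql
-- Currently executing requests and their resource usage:
SELECT session_id, status, command, cpu_time, total_elapsed_time
FROM sys.dm_exec_requests;

-- Approximate row counts per table (clustered index or heap only):
SELECT OBJECT_NAME(object_id) AS table_name,
       SUM(row_count)         AS row_count
FROM sys.dm_db_partition_stats
WHERE index_id IN (0, 1)
GROUP BY object_id;
```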
We are looking at additional possibilities to expose a tracing mechanism in the future.
1. Create a new Database.
2. Check an "Enable OData data services" option.
3. Get a no-code OData service over the database. (309 votes)
I’d love to hear more about the scenarios that you’re looking for here. We are actively working in the DB encryption space, including encrypted columns and a cloud equivalent of TDE. Please send me email with your scenarios to firstname.lastname@example.org
I want to utilise LINQ and TransactionScope in my web roles, and DTC support in SQL Azure would solve this problem. (258 votes)
We understand that there are a class of apps that could benefit from transaction coordination with SQL Database. We hope to look at this more in the future, however we don’t have immediate plans to ship a DTC for SQL Database. Customers should use eventual consistency techniques in the absence of DTC.
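One common eventual-consistency technique is to make each step idempotent so it can be retried safely without a coordinator. A minimal sketch, where all table, column, and parameter names are illustrative:

```sql
-- Record each processed message so a retry becomes a no-op
-- (all names here are hypothetical, not part of any product API).
CREATE TABLE ProcessedMessages (
    MessageId uniqueidentifier NOT NULL PRIMARY KEY
);

BEGIN TRANSACTION;
IF NOT EXISTS (SELECT 1 FROM ProcessedMessages WHERE MessageId = @MessageId)
BEGIN
    INSERT INTO ProcessedMessages (MessageId) VALUES (@MessageId);
    UPDATE Accounts
    SET Balance = Balance + @Amount
    WHERE AccountId = @AccountId;   -- the actual work, applied at most once
END
COMMIT TRANSACTION;
```

Because the check and the work happen in one local transaction, replaying the same message after a failure leaves the data unchanged.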
Service Broker should be available in Windows Azure. (230 votes)
Today we don’t have a cloud equivalent of SQL service broker outside of either service bus or SQL Server in a VM. I’d love to understand the scenarios that you have in mind. Please send me email at email@example.com if you’d like to continue the conversation.
Storing blobs in the database is fairly pointless when you have a dedicated blob store. Implement the FILESTREAM protocol to store blobs in a user's Azure storage. (205 votes)
We understand the scenario here and would love to enable it. While this isn’t something that’s on the published road map, it is something that we are working on.
Ability to create jobs that can run every 1 minute, 1 hour, etc.
BTW, love SQL Azure; keep up the good work, guys! (198 votes)
Today in Windows Azure there are two possible alternatives: SQL Server in a VM, or the Windows Azure job scheduler (http://www.windowsazure.com/en-us/services/scheduler/). Please send me email if you have scenarios that don’t work with either of these two options. Guyhay@microsoft.com
Following the guidance from
I attempted to rebuild a heavily fragmented index using Online=ON:
ALTER INDEX ALL ON [Backups] REBUILD WITH (ONLINE = ON);
And received the following error:
'ONLINE INDEX DDL WITH LARGE OBJECT' is not supported in this version of SQL Server.
My table contains a varbinary(max) column. This is a serious flaw. Please fix. (175 votes)
We are working on a number of scenarios that relate to index rebuilding, and index rebuilding with large objects. We’d like to hear more about the scenarios. firstname.lastname@example.org
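Until that work lands, two possible workarounds, assuming the table from the report above:

```sql
-- REORGANIZE is always performed online and supports LOB columns,
-- though it only defragments the leaf level of the index:
ALTER INDEX ALL ON [Backups] REORGANIZE;

-- Or rebuild offline during a maintenance window:
ALTER INDEX ALL ON [Backups] REBUILD WITH (ONLINE = OFF);
```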
Add in SQL 2008 Change Tracking. (154 votes)
We’d like to hear more about the scenarios that you’re thinking about here: email@example.com
I understand that SELECT INTO is not supported in SQL Azure standard tables because it would create a new table that doesn't have the clustered index needed to replicate the data correctly.
However, tables in tempdb can be created without a clustered index, so the SELECT INTO command should be usable when the output table is stored in tempdb.
99% of all my SELECT INTO statements are used for temporary tables (to increase speed in reporting and data processing). (110 votes)
We understand this issue, but have no immediate plans to address it.
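In the meantime, the usual workaround is to create the target table explicitly and use INSERT ... SELECT, which is supported. A sketch with illustrative table and column names:

```sql
-- Instead of: SELECT Col1, Col2 INTO #Results FROM dbo.Source ...
CREATE TABLE #Results (
    Col1 int,
    Col2 nvarchar(100)
);

INSERT INTO #Results (Col1, Col2)
SELECT Col1, Col2
FROM dbo.Source
WHERE Col1 > 0;
```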
Allowing data compression would make it easier to fit within a 1 GB/10 GB limit and help improve I/O performance on large datasets, not to mention it helps with T-SQL/SQL Server compatibility. (74 votes)
We understand the issue of database size and SQL Database.
This could be done with the future cloning technique, or perhaps a more sophisticated one? (64 votes)
We understand the scenario, and are working with the Windows Azure website team. We hope to be able to address this in the future.
I'm using the command SET CONTEXT_INFO to pass information about the current user and application (among others) to be used with inserts and updates in all the tables. In SQL Azure this command is not available because, I think, it is used to set the session GUID:
"When the client application connects to SQL Azure, CONTEXT_INFO (Transact-SQL) is set with a unique session-specific GUID value automatically. Retrieve this GUID value and use it in your application to trace the connectivity problems." (51 votes)
We understand the scenario and are working to implement. However I don’t have a date to share right now. Please stay tuned.
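Per the quoted guidance above, the pre-populated session GUID can be read back for tracing. A minimal sketch, assuming the GUID is stored as character data in CONTEXT_INFO:

```sql
-- Read the session-specific tracing GUID that SQL Azure places
-- in CONTEXT_INFO on connect (conversion assumes character storage):
SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO()) AS session_tracing_id;
```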
We would like to have an indicator that tells us how close our Azure services are to being throttled. We don't need specific throttle metrics, just a general indicator (red/yellow/green) for each Azure service that can be throttled (SQL Azure, Table storage, Blob storage, etc.).
With this information, we can scale down our usage of non-essential threads in favor of allowing mission-critical threads to continue processing rather than getting stopped cold when they are throttled (even with a retry policy, you're stopped/on hold until the throttle is removed). (51 votes)
We understand the scenario and are working on portal improvements. We don’t have a detailed time-frame to share today. Please stay tuned.