MySQL LOAD DATA INFILE: Better Server, Worse Performance

I am testing out Microsoft Azure Database for MySQL and have run into a performance issue that I do not understand.
I launched a "Basic" server with 1 vCore (2 GB RAM, "Standard Storage"), which is their lowest possible tier of server. I created a database, a table, and imported about 4 million rows (30 GB) with LOAD DATA INFILE. It took 56 minutes.
Next, I launched a "Memory Optimized" server with 8 vCores (80 GB RAM, "Premium Storage"). I repeated the exact same tasks and loaded the exact same file. This time it took 7 hours and 16 minutes.
Better server, much worse performance (on this task) -- not what I was expecting. To be certain I had not made a mistake, I repeated the steps above, but I got almost the exact same results again.
I suspect the issue is that the Memory Optimized server has different default server parameters than the Basic server which make this task perform more slowly (I haven't changed the parameters from the defaults that Azure sets). But I am not sure which parameters are the culprit. If anyone has insight into this issue, I'd appreciate it.
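For reference, the load was done with LOAD DATA INFILE. The exact statement isn't shown in the question; a typical invocation looks something like this (the server name, table, file path, and delimiters are placeholders, not taken from the question):

```shell
# Hypothetical sketch of the kind of statement used; adjust names and delimiters.
mysql -h example.mysql.database.azure.com -u admin -p mydb -e "
  LOAD DATA LOCAL INFILE '/data/rows.csv'
  INTO TABLE big_table
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n';"
```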
Basic server parameters:
Memory Optimized server parameters:


Answer 1:

Here’s what seems to have been causing this behavior:

Per the Azure documentation, the Basic tier server on Azure comes with “variable” IOPS, whereas the Memory Optimized server comes with a fixed number of IOPS based on the amount of storage assigned to the database server.

I had 100 GB assigned to the Memory Optimized server. This resulted in it having 300 IOPS, in accordance with Azure’s 3 IOPS / GB ratio.

Presumably the “variable” IOPS on the Basic server ended up being significantly higher than the 300 IOPS the Memory Optimized server had.

Lesson learned: to get fast storage access on Azure Database, you need to assign plenty of storage capacity to your server (even if you don’t need that much storage!).
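The provisioning rule is simple enough to sketch. The 3 IOPS/GB ratio comes from the answer above; the 1 TB figure is just an illustration:

```shell
# Azure provisions IOPS in proportion to allocated storage: 3 IOPS per GB.
storage_gb=100
echo "$((storage_gb * 3)) IOPS"   # the 100 GB server above gets 300 IOPS

storage_gb=1000
echo "$((storage_gb * 3)) IOPS"   # allocating 1 TB would provision 3000 IOPS
```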

Answer 2:

Suggestions for your Azure server parameters when you are LOADing millions of rows of data:

innodb_change_buffer_max_size=50  # up from 25, for improved LOAD speed during a high-volume process

When done, set it back to 25 (or less), depending on your needs for typical operation.

On your Memory Optimized instance:

innodb_lru_scan_depth=100  # down from 1024, to conserve roughly 90% of the CPU cycles used by this function

For next test, these should reduce elapsed time.
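A sketch of how these could be checked and changed from a client session. This assumes sufficient privileges; on a managed Azure Database for MySQL server, parameters are normally changed through the portal's server-parameters blade rather than SET GLOBAL:

```shell
# Inspect and raise the change buffer before the bulk load, then restore it after.
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_change_buffer_max_size';"
mysql -e "SET GLOBAL innodb_change_buffer_max_size = 50;"
mysql -e "SET GLOBAL innodb_lru_scan_depth = 100;"
# ... run the LOAD DATA INFILE job here ...
mysql -e "SET GLOBAL innodb_change_buffer_max_size = 25;"   # back to normal
```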


How to setup a domain that doesn’t use www prefix on Azure/EC2

Suppose I have a domain with a series of pages. I want to use the domain and provide links without a www prefix. These pages will also link to other pages (and each other) without the www prefix.
I'm using Azure to host our site, which is like EC2 in that the domain cannot have an A record because the server environment is virtualized. So both Microsoft and Amazon tell you to use CNAME records to point your domain at a record they manage. In the case of Azure, my CNAME record points to a record (which Microsoft manages):  CNAME

where is my domain I've setup on Azure.
I'm encountering all sorts of problems with various things like email delivery, and some people are unable to load the site. Two different nameserver providers and my host tell me that it violates the RFC to have a root domain be a CNAME entry, which is why my problems are sporadic.
I'm surprised to hear this, as you see so many sites these days hosted on EC2 or Azure that do not use a www prefix. How are these sites set up so that they can use just the naked domain (without the www prefix)?
Update: While I asked the question about EC2 or Azure, I'm specifically looking for info on Azure.


Answer 1:

Web sites running on EC2 can be served through zone apex or naked domain like “” with A records. The fact that the server is running in a virtual machine has no impact on that. Many sites do this including most of mine.

If you are using a service that requires a CNAME, like Amazon’s Elastic Load Balancer, then you cannot point to it with a zone apex or naked domain like “” as CNAMEs cannot be used with naked domains. This is a restriction of the DNS spec, not something related to cloud or virtualization implementations.

You can still use Amazon’s Elastic Load Balancer with a zone apex or naked domain like “” as long as you host your DNS for that zone using Amazon’s Route53 DNS service. Amazon does tricks to make an A record map dynamically to the results of what the CNAME would have returned, complying with the DNS spec while providing the flexibility and power that ELB needs to provide.
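The difference is visible with dig; a sketch using example.com as a stand-in for your domain (a Route 53 alias answers with plain A records at the apex, while www may legitimately be a CNAME):

```shell
# Compare what the apex and the www host return.
dig +short A example.com          # apex: should yield A records only
dig +short CNAME www.example.com  # www: a CNAME here is fine
dig +short CNAME example.com      # a CNAME at the apex would violate the DNS spec
```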

I don’t know anything about Azure. You are violating the DNS spec in how you’re trying to use it.

Answer 2:

You can do this if your DNS is hosted with either DNSimple using their ALIAS record type, or DNSMadeEasy using their ANAME record type.

Cloudflare’s DNS also allows this.

Also, if you use EC2, ELB and Amazon’s Route53 DNS service it’s also possible.


Azure Site-to-Site VPN with a Linux based router to bridge the VPN ports to a RRAS server while keeping NAT for other traffic

I am trying to get an Azure Site-to-Site VPN up and running using RRAS but require help configuring my router's iptables to bridge the VPN ports and protocols to the RRAS server without using NAT while still allowing NAT to be used for all other traffic.
I've been able to configure Azure and the VM correctly, and I can get the connection up and running with data flowing across the link (and shown in the Azure portal as connected, with data going both ways). However the connection that is set up isn't usable; I can't ping or make any other form of connection between the Azure VMs and my local machines.
I believe it is due to Azure's Site-to-Site VPN requiring the local VPN gateway to be connected directly to the internet without going through a NAT firewall (which I have to use, as it's my home broadband and I only have one IP). So while a connection can be established, the NAT on my router alters the packets in a way that means they cannot be routed correctly once they reach the RRAS server.
I have tried alternative Site-to-Site VPN solutions (OpenVPN, SoftEther, RRAS itself), but none work correctly; all exhibit the same problem. The two VMs hosting the VPN connection can connect to everything, and my home network servers can be routed correctly to the Azure side, but no other Azure VM is able to connect to my home network, despite my adding static routes via the VPN server or adding it as an additional gateway. I believe this is due either to restrictions on Azure VMs (only a single network adapter and no ability to enable promiscuous mode) or to the Azure virtual network itself.
I am using an Asus RT-AC68U router running the latest Merlin build, so I am hoping I can use the Azure Site-to-Site VPN by changing its iptables rules and network configuration in the following way:

Leaving the existing iptables configuration largely untouched so NAT can continue to be used to allow my other local servers and workstations to continue to connect to the internet, or have ports forwarded to them.
The RRAS VM on my home network has two network adapters, one with the private IP, currently the other has a private IP but this would be changed to be my public IP as per Microsoft's recommendations.
Lastly, the part I do not know how to accomplish: the specific VPN ports and protocols (UDP 500, UDP 4500, UDP 1701, and ESP (protocol #50)) need to be excluded from NAT and bridged directly to the VM's network adapter that is now configured with my home network's public IP.

The network looks like this currently:
Azure VMs -, VPN gateway using dynamic routing with a public IP.
Home network
VDSL Modem <- PPPOE bridge so the Router has the public IP -> Asus Router <- NAT ->
So this is what I am trying to get to so RRAS has public ip for VPN purposes only:

VDSL Modem <- PPPOE bridge so the Router has the public IP -> Asus Router

<- Bridge for the VPN ports and protocol -> RRAS VM's public adapter's MAC address
<- NAT for everything else ->

I'm open to other suggestions to get the Site-to-Site VPN working if there is a simpler solution.
[update 1]
I'm currently thinking of the following: I have given the RRAS Hyper-V VM the additional network adapter and assigned it to VLAN 635; promiscuous mode is enabled in case it wants to change its MAC address for some reason. I've then disabled all connection items other than the two Link-Layer Topology Discovery items and IPv4.
I've assigned the IPv4 settings the public IP address, a subnet mask of , and a gateway that is the same as the ppp0 adapter's gateway in the router.
I've run the following on the router to attempt to direct any traffic trying to communicate with the Azure VPN gateway through to the VLAN, and thus hopefully allow it to be routed without using NAT:
/usr/sbin/ip link add link br0 name br0.635 type vlan id 635
/usr/sbin/ip link set dev br0.635 up

/usr/sbin/iptables -I INPUT -i br0.635 -j ACCEPT
/usr/sbin/iptables -I FORWARD -i ppp0 -o br0.635 -s  -j ACCEPT
/usr/sbin/iptables -I FORWARD -i br0.635 -o ppp0 -d  -j ACCEPT
/usr/sbin/iptables -I FORWARD -i ppp0 -o br0.635 -s  -j ACCEPT
/usr/sbin/iptables -I FORWARD -i br0.635 -o ppp0 -d  -j ACCEPT

Unfortunately this doesn't work and when attempting to connect in RRAS it tells me the remote server isn't responding.


Answer 1:

So I’ve managed to figure this out after a lot of digging around. I am able to use the native Azure Site-to-Site VPN functionality with Openswan, which runs on a Linux box (Raspberry Pi / Arch Linux) behind my home network’s NAT router.

Network topology:

  • – Home network
  • – Azure network
  • – Home router’s private IP
  • – Linux box acting as the home network’s VPN server and gateway

Firstly I set up Azure with:

  • Its remote network as normal
  • My local network with the VPN address as my public IP
  • Enabled the Site-to-Site checkbox on the Azure network linking it to my local network
  • Created a static gateway so IKEv1 is used

On my home network’s router I forwarded the following to my Linux gateway running Openswan (

  • UDP 500
  • UDP 4500
  • Protocol 50 (ESP)

My ipsec.conf looks like this:

version 2.0

config setup

conn azure
    right=<azure's VPN gateway IP>

ipsec.secrets: <azure vpn gateway> : PSK "Azure's PSK"

That got the link up and running. To allow routing between sites (both ways, after a lot of frustration), I applied these sysctl settings:


net.ipv4.ip_forward = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1

A bash script that runs on boot to keep ‘ipsec verify’ happy:


for vpn in /proc/sys/net/ipv4/conf/*; do
    echo 0 > $vpn/accept_redirects
    echo 0 > $vpn/send_redirects
done

sysctl -p

Finally my iptables rules which took the most amount of fiddling:

filter table: this allows the home network to connect to Azure and allows the connection to be established, but Azure VMs still aren’t able to connect to home network servers:

-A FORWARD -s -m policy --dir in --pol ipsec -j ACCEPT
-A FORWARD -s -m policy --dir out --pol ipsec -j ACCEPT
-A INPUT -p udp -m udp --dport 500 -j ACCEPT
-A INPUT -p udp -m udp --dport 4500 -j ACCEPT
-A INPUT -m policy --dir in --pol ipsec -j ACCEPT
-A INPUT -p esp -j ACCEPT

nat table: this allows the Azure VMs to connect to any machine on my home network:

-A PREROUTING -i eth0 -p udp -m udp --dport 4500 -j DNAT --to-destination <azure public vpn ip>:4500
-A PREROUTING -i eth0 -p udp -m udp --dport 500 -j DNAT --to-destination <azure public vpn ip>:500

With all this I can ping and communicate in both directions: all Azure VMs can see my home network, and all home network machines can see my Azure VMs.

The Azure side routes correctly on its own. For my home network I set up a static route in my router, but for testing on my machine I just created a static route:

route -p ADD MASK METRIC 100

I hope at least someone finds this useful. Getting this set up has taken a long time, and there isn’t a good complete guide out there, just a mixture of solutions which partially work.


Azure Virtual Machine can’t serve websites

I am unable to get an Azure VM running Windows Server 2012 to serve up the IIS default website from its public static IP.

Created a VM running Win Server 2012 R2 and installed web server role.
Browse to localhost and can see the default website is working
In Azure, configured the VM to have a public static IP address and added a DNS name in Azure which publicly resolves to the static IP.
In Azure, configured a Security Group for the VM network interface and added the following rules:
allow-http    source: any    source port: 80    dest: any    dest port: 80    service: tcp/80    action: allow
allow-https    source: any    source port: 80    dest: any    dest port: 80    service: tcp/443    action: allow

In Windows Firewall settings, made sure the rules to allow HTTP and HTTPS traffic are enabled. (I have also tried disabling the firewall entirely).
In IIS, made sure the default website is bound to any IP address.

When I try to connect to the VM's static IP address, e.g. http://MY.PUBLIC.STATIC.IP, I can't connect at all. I can't PING the server either.
Any ideas on what I am doing wrong?


Answer 1:

You are limiting incoming connections to those with source port 80 only. Client browsers use a source port between 1024 and 65535 for their outgoing connections.

You need to change your security group settings to allow incoming connections from Any port:

allow-http    source: any    source port: any    dest: any    dest port: 80    service: tcp/80    action: allow
allow-https    source: any    source port: any    dest: any    dest port: 80    service: tcp/443    action: allow

Answer 2:

Instead of specifying “any” as the ‘destination port’ in both of your rules, you need to specify ports “80” and “443” as the destination ports.

With that, when you browse to http://MY.PUBLIC.STATIC.IP your traffic will reach port 80, and when you browse to https://MY.PUBLIC.STATIC.IP your traffic will reach port 443.

In this Microsoft link, check the sub-topic ‘NSG for FrontEnd subnet’.
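Putting the two answers together, the corrected rules could be created like this with the modern Azure CLI (a sketch; the resource group and NSG names are placeholders, and this CLI postdates the original question):

```shell
# Allow inbound HTTP and HTTPS from any source port to ports 80 and 443.
az network nsg rule create -g myResourceGroup --nsg-name myNsg -n allow-http \
  --priority 100 --direction Inbound --access Allow --protocol Tcp \
  --source-port-ranges '*' --destination-port-ranges 80
az network nsg rule create -g myResourceGroup --nsg-name myNsg -n allow-https \
  --priority 110 --direction Inbound --access Allow --protocol Tcp \
  --source-port-ranges '*' --destination-port-ranges 443
```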


Azure RecoveryServiceVault can’t be removed?

During the free trial, I spent time fiddling and experimenting with Azure. Now that we've moved to the paid version, I need to delete all of the experiments, as we don't need them.
One of those is a Recovery Services vault that somehow got something stuck in its backup usage (see screenshot below).
The recovery vault as it is now: everything is empty apart from the GRS backup usage.
I've looked in all the settings and can find nothing left to remove. Any storage account that may have been linked to the vault is long gone; it's really the only thing left in the resource group. I also can't remove the resource group because of this vault.
Any time I try to delete I get following error:

Vault deletion error
Vault 'TestRecoveryServiceVault' cannot be
  deleted as there are existing resources within the vault. Please
  delete any replicated items, registered servers, Hyper-V sites (Used
  for Site Recovery), policy associations for System Center VMM clouds
  (Used for Site Recovery) and then delete the vault.

I even tried the PowerShell commands
$vault = Get-AzureRmRecoveryServicesVault -Name "TestRecoveryServiceVault"
Remove-AzureRmRecoveryServicesVault -Vault $vault

(same error as above) and
Remove-AzureRmRecoveryServicesVault -Vault $vault -Force

(but this one throws an error that the parameter -Force doesn't exist; I suspect outdated documentation)
I'm at my wits end and would really like this vault gone. Any help is appreciated.
There are no tasks left in the vault; only 6MB of data that seems to have come from nowhere, as it didn't get deleted with the tasks. I did not opt to keep backup data when removing tasks.


Answer 1:

I was finally able to remove the vault after clearing the SQL backups from it through PowerShell. I’m really surprised no one knew about this and it took so much digging to find.

Commands for anyone else having this problem: these first check whether anything is in the database backups, then remove it all.

$vault = Get-AzureRmRecoveryServicesVault -Name "VaultName"

Set-AzureRmRecoveryServicesVaultContext -Vault $vault


$container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureSQL -FriendlyName $vault.Name

$item = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureSQLDatabase

$availableBackups = Get-AzureRmRecoveryServicesBackupRecoveryPoint -Item $item



$containers = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureSQL -FriendlyName $vault.Name

ForEach ($container in $containers) {
    $items = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureSQLDatabase

    ForEach ($item in $items) {
        Disable-AzureRmRecoveryServicesBackupProtection -Item $item -RemoveRecoveryPoints -ea SilentlyContinue
    }

    Unregister-AzureRmRecoveryServicesBackupContainer -Container $container
}

Remove-AzureRmRecoveryServicesVault -Vault $vault

I hope I helped some other people out there who ran into this mess.

Answer 2:

You need to delete any backup tasks in the vault before you can delete it; the easiest way is with this PowerShell:

$vaultName = "<vault name>"
$vault = Get-AzureRmRecoveryServicesVault -Name $vaultName
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
$containers = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVM -Status Registered
foreach ($container in $containers) {
    $backupItems = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM
    foreach ($backupItem in $backupItems) {
        Disable-AzureRmRecoveryServicesBackupProtection -Item $backupItem -RemoveRecoveryPoints -Force -Confirm:$false
    }
}


how do i migrate Azure storage account from classic to ARM

I have created a storage account and have a few VMs and blobs. It is a classic account.
I want to migrate or convert the storage account to ARM (the new deployment model). What is the process for doing so?
I have tried to move the contents of one resource group to a different one, but I didn't get the option to move from classic to ARM.



Answer 1:

The whole process of moving ASM to ARM resources can be found here.

Migrate IaaS resources from classic to Azure Resource Manager by using Azure PowerShell

To migrate a Storage Account, all you need is to execute the following PS Cmdlets:

ps:> $storageAccountName = "myStorageAccount"
ps:> Move-AzureStorageAccount -Validate -StorageAccountName $storageAccountName
ps:> Move-AzureStorageAccount -Prepare -StorageAccountName $storageAccountName
ps:> Move-AzureStorageAccount -Commit -StorageAccountName $storageAccountName

If you want to abort the process (before commit), just use:

Move-AzureStorageAccount -Abort -StorageAccountName $storageAccountName

Answer 2:

i want to migrate or convert the storage account to ARM or new

In Azure, we can’t convert the storage account from ASM to ARM, but we can migrate it.

Do you want to move the VMs and the storage account to the ARM model? If yes, we can use the following script to move them.

Move the VMs to ARM (these VMs were created without a virtual network, behind a cloud service):

Login-AzureRmAccount  #login Azure Account ARM module
Get-AzureRMSubscription | Sort SubscriptionName | Select SubscriptionName
Select-AzureRmSubscription –SubscriptionName "My Azure Subscription"
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate
Get-AzureRmResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate

Add-AzureAccount     #login Azure Account ASM module
Get-AzureSubscription | Sort SubscriptionName | Select SubscriptionName
Select-AzureSubscription –SubscriptionName "My Azure Subscription"
Get-AzureService | ft Servicename
$serviceName = "jasonvm333"
$deployment = Get-AzureDeployment -ServiceName $serviceName
$deploymentName = $deployment.DeploymentName

$validate = Move-AzureService -Validate -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork

Move-AzureService -Prepare -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork
Move-AzureService -Commit -ServiceName $serviceName -DeploymentName $deploymentName

After the VM move completes, use PowerShell to move the storage account to ARM:

$storageAccountName = "jasontest333"
Move-AzureStorageAccount -Prepare -StorageAccountName $storageAccountName
Move-AzureStorageAccount -Commit -StorageAccountName $storageAccountName

For more information about moving IaaS resources to ARM, such as migrating VMs to a platform-created virtual network or to an existing virtual network in the Resource Manager deployment model, please refer to this link.

Answer 3:

Indeed, @Stevie W is correct: there is an option in the Storage Account (Classic) blade that offers Migrate to ARM. Clicking it offers buttons for Validation, Prepare, and Commit.

Azure Migrate to ARM

Answer 4:

I know this is a late reply, however this thread comes up when searching for migrating classic storage to ARM so thought I’d provide an update.

Since the last comment on here, the Azure Portal has been updated to allow running the migration process within the blades. This worked for a test account, and it seems to just apply the three PowerShell commands already noted by Bruno.

Azure Storage Blade – “Migrate To ARM” option


Restoring SQL Server backup to SQL Azure on SQL Management Studio

I am using Microsoft SQL Server Management Studio 2008 R2 to connect my SQL Azure account. I have had no problems connecting to the database.
However, when I tried to back up an existing database from my local SQL Server and restore it using SQL Mgmt Studio, I ran into a weird situation.
On my local database, when I right-click the database I can see options including but not limited to:

Tasks -> Back up, Restore

however, when I right-click the remote SQL Azure database I connected, I don't see these options and what I see instead are "Extract Data-tier as Application, Register Data-tier as Application".
I want to restore backup binary file that I exported using "Tasks->Back up" from my local database.
Any ideas why Restore and Back up options do not show up on context menu of remote SQL Azure database?
Later on I discovered the SQL Azure Migration Wizard on CodePlex.


Answer 1:

You can’t.

Quote from SQL Azure Overview (

For example, you cannot specify the physical hard drive or file group where a database or index will reside. Because the computer file system is not accessible and all data is automatically replicated, SQL Server backup and restore commands are not applicable to SQL Azure Database.

Answer 2:

Backup and Restore aren’t currently supported by SQL Azure. There are various ways to back up data, including using BCP and Data Sync Services.

I wrote a small (currently free) tool that creates a backup of a SQL Azure database on a local SQL Server, cunningly called SQL Azure Backup. Really interested in getting feedback on it to make it better.

Answer 3:

Not 100% sure, but I know that SQL Azure is a subset of SQL Server, so they might not allow restoring a backup in case you’re using functionality and features that are not included in Azure.

You’re discovering the biggest issue (in my mind) of working with SQL Azure: there aren’t “simple” ways to sync between a local database and Azure.


Running passive FTP in Azure Virtual Machine with vsftpd-linux

How to run a passive FTP server on an Azure Linux Virtual Machine?
Configuring the endpoints in the Azure firewall and the PASV ports isn't enough, because the client hangs on "Entering passive mode".


Answer 1:

Currently, running passive FTP as smoothly as you would on a dedicated server isn’t possible, for two reasons: one is that Azure currently allows you to open only 25 endpoints per server (please correct me if I’m wrong), and the other is the LAN <-> Virtual IP connection that Azure uses. Let’s take the problems one by one.

Azure currently implements a NAT/firewall/load balancer that forwards traffic from an external Virtual IP to an internal network address. If you run ifconfig on your virtual machine you’ll see what I’m talking about. One endpoint is reserved for SSH, and I don’t believe you really want to disable it. So if another endpoint is reserved for port 21, you can use only 23 PASV ports (as long as you don’t host any other services), strictly limiting the number of clients that can connect simultaneously. Once you accept this, let’s go on.

If you opened ports 25003-25006 (one by one) you can use the following configuration to enable them
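The configuration itself was not preserved above; a sketch using vsftpd's standard passive-mode directives for that port range:

```ini
# vsftpd.conf: enable passive mode and pin data connections to the opened endpoints
pasv_enable=YES
pasv_min_port=25003
pasv_max_port=25006
```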


In response to the PASV command, vsftpd (like any other FTP server) sends a reply that basically says “connect to X.Y.W.Z on port AA”. The FTP server reads the machine’s configuration to obtain its network address: this is why vsftpd basically says “connect to 10.X.Y.Z on port 25003”, and then why the client hangs!

Use the following to tell vsftpd to use a different external address
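The directive in question is pasv_address, a standard vsftpd option; a sketch with the Azure Virtual IP as a placeholder:

```ini
# vsftpd.conf: advertise the external Virtual IP in PASV replies instead of the
# internal 10.x address vsftpd reads from the machine's interfaces
pasv_address=<your Azure Virtual IP>
```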


Tested, worked and shared with the community!

Notes: Active FTP works as long as the client is not behind a firewall or a Great Wall, and SFTP is the best alternative to FTP, but unfortunately many legacy applications don’t support it.


How do you live-migrate Hyper-V to Azure?

I have a new install of Windows Server 2012 with the Hyper-V role setup and a couple VMs running along fat, dumb, and happy. I want to play with Azure hosting for VMs for a couple of stand-alone boxes. Is there anything special that I need to wire up to be able to live-migrate to Azure? I have the 90-day Azure trial account right now. Any special plumbing required? I have not found a lot of documentation about this yet. Everything I found points to manually copying the VHDs via command line and the Azure 2012 SDK.


Answer 1:

There is no path to live migrate a VM to Windows Azure. Moving a VM to Azure requires shutting down the VM.


Can you have a staging and production slot in Azure Websites

I'm looking at hosting 3 websites (they will all use the same linked database resource, but I think I have to use 3 websites within Azure for this).
Using Windows Azure Websites, can you have Staging and Production slots? I think this feature is only available in Azure Cloud Services, but there is little documentation on this. If it's not possible, other than spinning up 3 more sites to act as the staging sites, is there another way?
I want the ability to "swap" from staging to production.


Answer 1:

I think you’d have to spin up the extra staging sites. Are you deploying your sites via Git deploy? If so, it’s probably better to have separate staging and production sites anyway. That way, you can make your changes in the staging branch, push them across, and then merge your staging branch into your production branch when you’re ready and push that. What problem are you trying to solve with this approach?

Answer 2:

At the time (Nov 2012), levelnis’ answer was correct.

However, the Azure team has recently implemented slots for Websites. It looks like the initial implementation is two slots: one for production and one for staging.

Reading the dev branch of the xplat tools, it seems these might be extended to several slots named as you please. The tools will have commands like azure site create --slot and azure site swap for managing these slots and the relationships between them. I think this will enable some really robust approaches to continuous delivery on Azure Websites.
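Based on the commands named above, the workflow would look roughly like this (a sketch; the exact syntax shipped in the xplat tools may differ):

```shell
# Create a staging slot alongside production, deploy to it, then swap.
azure site create mysite                  # production site
azure site create mysite --slot staging   # add a staging slot
# ... push your changes to the staging slot and verify them ...
azure site swap mysite                    # promote staging to production
```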