Get started with Azure Functions

A very quick post today, on the back of Experts Live Europe where I presented a session on using APIs and Azure Functions to develop the DevOps Toolkit of the Future. Azure Functions are one of the best ways to get automation into and around your datacentre. Why? Because you can use them in a full cloud infrastructure, hybrid scenarios, or as standalone automation tools. I recommend following Ryan’s blog at dftai.ch for advanced tips on this topic. I want to get you started on creating your first Azure Function App as a basis for upcoming posts on the subject.

Things you will need:

A Powershell script or module

An Azure subscription

For our EL session, Ryan developed a Spotify API module for Powershell and posted it to GitHub here. The premise was to have more control of Spotify through their open API, for personal use, but I thought we could also use it to highlight what can be done in Azure in terms of tooling and automation. The Spotify module is an example of standalone automation.

So, the idea of this post is to show you how to quickly implement a simple Azure Function, using code you already have, and leveraging the functionality available in Azure.

Let’s get started: open the Azure Portal and log in with your Azure subscription. Then click +New, search the Marketplace for Function App, select that type, and click Create.

You need to give your Function App a unique name, then select a hosting plan depending on your subscription; I am using a Consumption Plan for this demo. Also, either create a new storage account or use an existing one, as the configuration of your app is saved into this storage. I pinned my app to the dashboard so I can find it easily.
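
If you prefer scripting to clicking, the resource group and storage account can be prepared with the AzureRm module first. This is only a sketch with placeholder names; I still create the Function App itself in the portal as shown here:

# A minimal sketch using the AzureRm module; resource and account names are placeholders
Login-AzureRmAccount
New-AzureRmResourceGroup -Name 'FunctionsDemo-RG' -Location 'West Europe'
New-AzureRmStorageAccount -ResourceGroupName 'FunctionsDemo-RG' -Name 'functionsdemostorage01' -SkuName Standard_LRS -Location 'West Europe'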

Once it has been created successfully, I can open my Function App and see Functions, Proxies and Slots. So I create a new Function here and I can add my PowerShell code: I add an HTTPTriggerPowershell function from under the custom templates.

Now, because I am not using Source Control (an advanced topic), I can simply edit my PowerShell code directly in the portal.

I can also edit the Triggers (HTTP), the inputs (Cosmos DB for instance) and outputs (HTTP) in the Integrate Tab.

You should notice that the variables for the inputs and outputs are automatically inserted into the code.
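
For reference, the body of the generated HTTP-trigger PowerShell function looks roughly like the following. It reads the posted JSON from the $req file and writes the response through the $res output binding; treat this as a sketch of the template rather than the exact generated code:

# Read the POSTed JSON body that the HTTP trigger hands over in the $req file
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$name = $requestBody.name

# Write the response back through the $res output binding
Out-File -Encoding Ascii -FilePath $res -InputObject "Hello $name"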

Now copy the function URL

We are ready to test! Let’s create a powershell script on our local machine like this:

$functionurl = "" # paste the Function URL you copied from the portal

$json = '{
"method": "POST",
"name": "Andrew"
}'

Invoke-RestMethod -Uri $functionurl -Method POST -Body $json

Run your script and voila: Hello World, or Hello Andrew in my demo, triggered by an HTTP POST, running in Azure and returning a result to your local machine. Serverless, hybrid automation.

That’s it. Check out Ryan’s blog for how to work with the Integrate tab (Triggers, Inputs and Outputs). You are ready to start implementing more powerful scripts: check out the Spotify module, think about connecting to other APIs... hmm, that sounds like the next post in the series. So, until then: happy serverless computing!
Andrew.


SCSM: Data Warehouse Troubleshooting Part 2 (Reinstall DW)

If all else fails: reinstall the Data Warehouse (System Center Service Manager).

IMPORTANT: Take a backup of the SQL DBs before you reinstall the DW, in case there are special SQL objects that haven’t been exported or saved.

Reinstalling the DW is pretty straightforward and done in the main SCSM console. This post is targeted at individuals who know what they are doing with SCSM and the DW, hence I am not going to go through how to remove and reinstall it. I thought it would be more interesting to look at what happens under the hood.

Before the DW is functional, it needs to run a number of deployment jobs (53 in my case), which are only visible in SQL. This has to happen before the DW can be registered in SCSM. (It seems the DW will regulate this itself, so that if you register the DW directly after the install the sync jobs will simply be queued; I am just doing this very granularly.)

So, while these deployment jobs are running, the DWMaintenance job is stuck in a Waiting status (and consequently all other DW jobs are also stuck or queued). Notice that DWMaintenance is not just Waiting but also has errors in its ErrorSummary, which prevents the sync jobs from running.

Once these deployment jobs are finished, DW should create a new Batch job for DWMaintenance which will be released from Waiting and should then run ok.

Step-by-step

Immediately after installation has completed, SQL will look like this:

[Screenshot: SCSM_DW_PT2_1]

Query:

select WI.BatchId, WI.StatusId, WI.ErrorSummary from infra.WorkItem(nolock) WI

join infra.Batch BAT on WI.BatchId = BAT.BatchId

join infra.Process PRO on BAT.ProcessId = PRO.ProcessId

where PRO.ProcessName = 'DWMaintenance'

select * from DeploySequenceView where DeploymentStatusId != 6

select * from DeploySequenceStaging

Run these until all the jobs in DeploySequenceView disappear; this takes roughly 1.5 hours.

SQL will then look like this:

[Screenshot: SCSM_DW_PT2_2]

With a new Batch job for DWMaintenance.

In Powershell we can now see the DW_EXTRACT, Transform and Load jobs. Ready for registration.

You can check the Deployment with: select * from DeployItemView

All Items should be completed.

The Extract, Load and Transform jobs all then run; even though the DW is not yet registered, this is just an initialization of the jobs. They don’t take long to run through.

Now we can try to register the DW.

After successful Registration the Extract Job will run. All jobs can now be seen in Powershell via the Warehouse Cmdlets.

Start (Resume) the MPSyncJob if you can’t wait.
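
If you prefer PowerShell to the console for checking and resuming jobs, the Data Warehouse cmdlets will do it. A sketch, assuming you run this on the DW management server and adjust the module path to your own install directory:

# Load the DW cmdlets from the Service Manager install folder (hypothetical path, adjust to your install)
Import-Module 'C:\Program Files\Microsoft System Center 2012\Service Manager\Microsoft.EnterpriseManagement.Warehouse.Cmdlets.psd1'

# List all DW jobs and their current status
Get-SCDWJob -ComputerName 'SCSMDWServer'

# Kick off the MPSyncJob instead of waiting for the schedule
Start-SCDWJob -JobName 'MPSyncJob' -ComputerName 'SCSMDWServer'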

The jobs run in the following order (sort of):

MPSyncJob/Disable Deployment Jobs

MPSyncJob/Synchronize ServiceManager MPs

MPSyncJob/Create ServiceManagerExtracts

Extract_DW_/*

MPSyncJob/Associate Imported MP Vertex

Wait for a long time until all these jobs finish... probably overnight.

SCSM: Data Warehouse Troubleshooting Part 1 (Jobs Fail on Missing Primary Keys)

Symptoms: The Load.Common or Transform.Common Jobs are failing in SCSM DW (Service Manager)

To find out why, run this query against the DWStagingAndConfig database:

select WI.WorkItemId,WI.BatchId, WI.StatusId, WI.ErrorSummary from infra.WorkItem(nolock) WI where WI.ErrorSummary is not null

If you see Errors referring to missing Primary keys, like this example:

Message: UNION ALL view '[dbo].PowerActivityDayFactvw' is not updatable because a primary key was not found on table '[dbo].[PowerActivityDayFact_2013_Jun]'.

Then you need to either rebuild the DW or re-create these primary keys. I have no idea why tables suddenly lose their primary key. The problem usually affects relationship tables and views (Facts). With a few exceptions, the primary key is composed of the first three columns (by ordinal position). These columns are usually the DimKey, the related item's DimKey and the DateKey; however, if it is a "Duration" or "measure" relationship, then the third column will be something like a StartDate or TimeKey. In that case you need the first three columns plus the DateKey, making four columns in total for the primary key. This eventuality is covered in the script. What is not covered are the EntityManagedType and EntityRelatesToEntity relationship tables, which have extra columns in their primary keys, and the SLAInstanceInformation relationship table, which has a special primary key. These exceptions must be dealt with separately.

Happily though, for everything else there’s a script:

<#
Fix Data Warehouse Primary Keys Issue
#>
$sqlConnection = New-Object System.Data.SqlClient.SqlConnection
$sQLServer = 'SCSMServer'
$sQLDBName = 'DWDataMart'
$sQLStagingDBName = 'DWStagingAndConfig'

# Run a non-query command (UPDATE/ALTER) against the given database
function SQLCommand($sQLCommand,$sQLDB){
    $sqlConnection.ConnectionString = "Server = $sQLServer; Database = $sQLDB;Integrated Security = True"
    $sqlConnection.Open()
    $sQLCmd = New-Object System.Data.SqlClient.SqlCommand
    $sQLCmd.CommandText = $sQLCommand
    $sQLCmd.Connection = $sqlConnection
    $sQLCmd.ExecuteNonQuery()
    $sqlConnection.Close()
}

# Run a query and return the results as a DataSet
function QueryTable($sQLQuery,$sQLDB){
    $sqlConnection.ConnectionString = "Server = $sQLServer; Database = $sQLDB;Integrated Security = True"
    $sqlConnection.Open()
    $sQLCmd = New-Object System.Data.SqlClient.SqlCommand
    $sQLCmd.CommandText = $sQLQuery
    $sQLCmd.Connection = $sqlConnection
    $sQLAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
    $sQLAdapter.SelectCommand = $sQLCmd
    $dataSet = New-Object System.Data.DataSet
    $sQLAdapter.Fill($dataSet)
    $sqlConnection.Close()
    return $dataSet
}

cls
# Adjust the LIKE filter to the month referenced in your error messages
$allMay2014FactTables = QueryTable "select TABLE_NAME from INFORMATION_SCHEMA.TABLES where TABLE_NAME like '%Fact_2014_Jun%'" $sQLDBName
foreach($table in $allMay2014FactTables[1].Tables[0]){
    $tableName = $table.TABLE_NAME
    $priKeyExists = QueryTable "SELECT COLUMN_NAME
        FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
        WHERE
        OBJECTPROPERTY(OBJECT_ID(CONSTRAINT_NAME), 'IsPrimaryKey') = 1
        AND
        TABLE_NAME = '$tableName'" $sQLDBName
    if($priKeyExists[1].Tables[0] -ne $null){
        "Primary Key Exists in $tableName"
    }else{
        "Primary Key MISSING: $tableName"
        $columns = QueryTable "select COLUMN_NAME,ORDINAL_POSITION
            from INFORMATION_SCHEMA.COLUMNS
            WHERE
            TABLE_NAME = '$tableName'
            ORDER BY ORDINAL_POSITION" $sQLDBName
        $priKey1 = $columns[1].Tables[0].Rows[0].COLUMN_NAME
        $priKey2 = $columns[1].Tables[0].Rows[1].COLUMN_NAME
        $priKey3 = $columns[1].Tables[0].Rows[2].COLUMN_NAME
        # If the third column is not the DateKey (Duration/measure facts), include the DateKey as a fourth key column
        if($priKey3 -ne 'DateKey'){
            $alterTableCMD = "ALTER TABLE [dbo].[$tableName] ADD CONSTRAINT [PK_$tableName] PRIMARY KEY NONCLUSTERED
            (
            [$priKey1] ASC,
            [$priKey2] ASC,
            [$priKey3] ASC,
            [DateKey] ASC
            )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [FileGroup2_Facts1]"
        }else{
            $alterTableCMD = "ALTER TABLE [dbo].[$tableName] ADD CONSTRAINT [PK_$tableName] PRIMARY KEY NONCLUSTERED
            (
            [$priKey1] ASC,
            [$priKey2] ASC,
            [$priKey3] ASC
            )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [FileGroup2_Facts1]"
        }
        # NB: the filegroup name [FileGroup2_Facts1] is specific to this environment
        SQLCommand $alterTableCMD $sQLDBName | out-null
    }
}
# Clear the error summaries and reset the failed work items so the jobs can be resumed
SQLCommand "update infra.WorkItem set ErrorSummary = NULL,StatusId=3 where ErrorSummary is not null" $sQLStagingDBName | out-null

After running this script, try resuming the Load.Common Job and check for Errors. I recommend using Mihai’s script just to clean everything up:

http://blogs.technet.com/b/mihai/archive/2013/07/03/resetting-and-running-the-service-manager-data-warehouse-jobs-separately.aspx

EDIT: Microsoft already have a SQL script which will do the same thing. 🙂

https://technet.microsoft.com/en-us/library/dn299381.aspx

Quick Tip: Complex Powershell in Run Commandline Step of ConfigMgr TS

I am using this more and more now, maybe just because I think it’s kinda cool. I wanted to use some PowerShell in a Run Command Line step in a TS, which used a couple of lines of code, had some double-quote characters, and I didn’t want (or was too lazy) to create a script in a package, update the DPs and use that. I just wanted to test the script and demo what it did before I went down the package route.

Also, using double quotes in the Run Command Line step annoys me (escape characters and so on), so I found this neat little trick.

Take the code you want to use, e.g.

$tsenv=new-object microsoft.sms.tsenvironment;$tsenv.Value("SMSTSErrorDialogTimeout")=0

and turn it into a string variable, for example by wrapping it in a script block and calling ToString() (a here-string works just as well):

$script = {$tsenv=new-object microsoft.sms.tsenvironment;$tsenv.Value("SMSTSErrorDialogTimeout")}.ToString()

then use the System Convert and Text Encoding classes to create a Base64String

$encCmd = [System.Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes($script))

which gives

JAB0AHMAZQBuAHYAPQBuAGUAdwAtAG8AYgBqAGUAYwB0ACAAbQBpAGMAcgBvAHMAbwBmAHQALgBzAG0AcwAuAHQAcwBlAG4AdgBpAHIAbwBuAG0AZQBuAHQAOwAkAHQAcwBlAG4AdgAuAFYAYQBsAHUAZQAoACIAUwBNAFMAVABTAEUAcgByAG8AcgBEAGkAYQBsAG8AZwBUAGkAbQBlAG8AdQB0ACIAKQA=

Now use that with Powershell in a Commandline step like this

powershell.exe -EncodedCommand JAB0AHMAZQBuAHYAPQBuAGUAdwAtAG8AYgBqAGUAYwB0ACAAbQBpAGMAcgBvAHMAbwBmAHQALgBzAG0AcwAuAHQAcwBlAG4AdgBpAHIAbwBuAG0AZQBuAHQAOwAkAHQAcwBlAG4AdgAuAFYAYQBsAHUAZQAoACIAUwBNAFMAVABTAEUAcgByAG8AcgBEAGkAYQBsAG8AZwBUAGkAbQBlAG8AdQB0ACIAKQA=

Watch for line breaks when you copy the Base64String. Otherwise, it works, and if you find it useful or just cool then I’m glad.

Incidentally, to change the Base64String back to readable text:

[System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String($encCmd))
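
Putting the two steps together, here is a small helper of my own that takes a script block and spits out the full command line ready to paste into the Run Command Line step (just a convenience wrapper around the calls above):

function ConvertTo-EncodedCommandLine {
    param([scriptblock]$ScriptBlock)
    # -EncodedCommand expects the script text as UTF-16LE, Base64 encoded
    $bytes = [System.Text.Encoding]::Unicode.GetBytes($ScriptBlock.ToString())
    "powershell.exe -EncodedCommand " + [System.Convert]::ToBase64String($bytes)
}

# Example:
ConvertTo-EncodedCommandLine { $tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment; $tsenv.Value("SMSTSErrorDialogTimeout") = 0 }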

Accessing the Task Sequence Environment in ConfigMgr

I had a question at CMCE Switzerland about how to get access to the SMSTS variables through PowerShell, so here it is:

In either a script or at a PowerShell command window, create a COM object:

$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

you can use this to access and set Task Sequence variables:

$tsenv.Value("OSDComputerName")

will return the value of the OSDComputerName action variable

$tsenv.Value("SMSTSErrorDialogTimeout") = 0

sets the timeout on the error message box to something like 6 years (in seconds)
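
The same COM object can also enumerate every variable in the environment, which is handy when debugging a running TS. A quick sketch (this only works inside a task sequence, otherwise the COM object is not available):

$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

# Dump every task sequence variable and its current value
# Note: the output can include sensitive values, so don't leave this in a production TS
$tsenv.GetVariables() | ForEach-Object { "$_ = $($tsenv.Value($_))" }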

Policypv Unknown SQL Error

Came across this nice little nugget today. The site seemed generally in order, mostly green, but the Policy Provider was in a Warning state. Looking in the status messages I found hundreds of Unknown SQL Errors. The policypv.log didn’t shed much more light on the supposed problem. Had the site been in a more intensive operational state, someone would probably have noticed that they couldn’t add new Software Update packages, for instance.
The problem itself is a common one in the SQL Server world: stored procedures cannot be executed remotely, in this case sp_updpolicyresmap.
And the most common cause is a SQL DBA or a server admin being proactive and improving the server by adding or reconfiguring disks. To do this they detach the databases, make the changes and reattach them. What sometimes happens, though, is that for reasons unknown to me the TRUSTWORTHY setting of the database reverts to OFF and sa loses ownership.
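
If you want to verify this first, a quick look at sys.databases shows both the owner and the TRUSTWORTHY setting (plain T-SQL, just swap in the database name):

SELECT name, SUSER_SNAME(owner_sid) AS owner, is_trustworthy_on
FROM sys.databases
WHERE name = 'databasename'
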
Easy to fix though:
USE [databasename]
GO
EXECUTE sp_changedbowner 'sa'

ALTER DATABASE [databasename]
SET TRUSTWORTHY ON

BitLocker in SCCM with 2nd HDD – NEW and REFRESH Scenarios

There are a couple of challenges when using BitLocker in ConfigMgr 2012. Using Pre-Provisioning and locking a 2nd HDD in REFRESH Scenarios is one such challenge.

Here is how I handle it.

Scenario 1: NEW – Single Disk

Background and overview: New PC or Laptop, single hard drive.

1. Create a PreInstall partition on the disk if there are no available partitions

2. Configure BIOS and TPM (see previous post)

3. Format and Partition Disk0 for use with BitLocker. Create a BDE partition with fixed size 500Mb, NTFS and store the drive letter as a variable BOOTPART. Create a System partition of 100% remaining space, NTFS, store drive letter as OSPART.

4. Pre-Provision BitLocker to "Logical drive letter stored in a variable" – OSPART

5. At the end of the Task Sequence Enable BitLocker on “Current operating system drive”. Choose to wait for BitLocker to complete before continuing.

Scenario 2: REFRESH – Single Disk

Background and overview: Refresh PC or Laptop, single hard drive.

1. When started from Software Center, disable BitLocker on current operating system drive and reboot to WinPE.

2. If started from USB or PXE, use a script to unlock the operating system drive (a rough unlock sketch follows after the scenarios).

3. Continue from step 2 of Scenario 1.

Scenario 3: NEW – Additional Disk

Background and overview: New PC or Laptop, multiple hard drives.

1. Out of the box means starting from USB or PXE; use a script to unlock the OS and data drives.

2. Continue from step 2 of Scenario 1.

3. When finalizing BitLocker on the OS disk, choose to continue on error. This is because the attributes will be inconsistent after the C, D and E drive letters are reassigned, but the BitLocker process will finalize OK.

4. Enable BitLocker on the additional drive; choose either to wait for BitLocker to finish, or to continue and allow the drive to encrypt in the background. The machine will be usable, but the 2nd disk will have limited availability until the process is finished, which could take 20-50 minutes.

Scenario 4: REFRESH – Additional Disk

Background and overview: Refresh PC or Laptop, multiple hard drives.

1. When started from Software Center, disable BitLocker on current operating system drive and data drives and reboot to WinPE.

2. If started from USB or PXE, use a script to unlock the operating system drive and data drives.

3. Continue from step 2 of Scenario 1.

4. Continue from step 2 of Scenario 3.
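
The unlock script mentioned in the scenarios is essentially a wrapper around manage-bde. A rough sketch of the idea; how you obtain the recovery password (MBAM, AD, a web service) is specific to your environment, so treat the parameters as placeholders:

# Hypothetical unlock helper for WinPE; $RecoveryPassword must come from your own key store
param(
    [string]$Drive = 'C:',
    [string]$RecoveryPassword
)

manage-bde -status $Drive
manage-bde -unlock $Drive -RecoveryPassword $RecoveryPassword

# Suspend the protectors so the task sequence can work on the volume
manage-bde -protectors -disable $Drive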

Here are some screenshots of the TS:

[Screenshots: BitlockerTS_1, BitlockerTS_2, BitlockerTS_3]

Content download problems with large packages in ConfigMgr 2012

The customer was trying to install an AutoCAD type application via Configuration Manager 2012 (SP1 CU2) and the installation kept hanging in Software Center with the message “downloading data”.

The background is that this application had previously worked OK, but since then the source files had been altered and updated on the DPs. The source data itself is a whopping 9 GB.

So let’s go through the logs.

AppDiscovery, AppEnforce and AppIntentEval all look normal. Application is not detected and will be installed.

Execmgr makes a call to CAS requesting content. So far so good.

CAS requests content locations and receives 3 valid DPs.

CAS submits a CTM (Content Transfer Manager) job, and receives confirmation of the locations and submits a DTS (Data Transfer Service) job.

Now the DTS starts the download and immediately hits a problem.

GetDirectoryList_HTTP('http://dp:80/SMS_DP_SMSPKG$/Content_45106a31-c15b-4d29-ba68-97e1b97a5e9e.1') failed with code 0x80004005.

Error retrieving manifest (0x80004005).  Will attempt retry 1 in 30 seconds.

Sure enough it retries only to get:

Non-recoverable error retrieving manifest (0x80004005).

And this happens in turn for each available DP. Unfortunately Software Center will just sit there doing nothing. This last point may be because the installation is running in a task sequence but let’s not dwell on that for now.

So obviously a problem with the content on the DPs. Agreed?

Redistribute application to DPs – same error

Remove application from all DPs and redistribute – same error

Zip the source from 9 GB down to 4 GB, remove from the DPs and redistribute – same error. But wait: when I redistributed to the DPs it still took a long time, not half the time as I expected.

I had to get the Content ID from the DTS log and then look at the ContentLibrary on the DP. I found the content, but it didn’t match the source. In fact it didn’t match any updated source; it seemed to be the original content from before the first update. Very strange.

So to look at how distmgr and ContentLibrary work in more detail:

Source is copied to the primary site server and stored in a ContentLibrary there, even though there is no DP on this server. This is a small throwback to SCCM 2007 and is unavoidable. From there the content is copied to the DPs and stored in their ContentLibrary. Removing an application from the DP goes quite quickly in the console, but looking at the distmgr log and the smsdpprov log on the DP you can see that the data itself is not fully removed until some time after, depending on the size of the content. If you redistribute the content before it is fully removed from the ContentLibrary, then distmgr will skip copying the files that already exist. Also, distmgr will copy the content from the ContentLibrary on the primary server, if it exists there, rather than directly from the source. So in actual fact, with a 9 GB application, the source was never correctly updated on the DPs. It is very difficult to say exactly which files didn’t match the hash values in the manifest, but obviously we need to completely remove this application’s content from the DPs and from the primary server’s ContentLibrary, and wait until everything is completely gone before redistributing.

Easier said than done. Removing from the DPs is not so difficult, just as with any normal application. But make sure that distmgr and smsdpprov confirm it is gone, AND check manually in the ContentLibrary, in DataLib under the ContentID, to make sure it is physically gone. For 9 GB this can take about 30 minutes.
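
To save clicking through folders, something like this checks the DataLib for leftovers; the content library path is an assumption, so point it at wherever SCCMContentLib lives on your DP:

# Hypothetical path and ID - adjust to your DP and to the ContentID from the DTS log
$contentLib = 'D:\SCCMContentLib'
$contentId = 'Content_45106a31-c15b-4d29-ba68-97e1b97a5e9e'

# Anything still listed here means the content is not physically gone yet
Get-ChildItem -Path (Join-Path $contentLib 'DataLib') -Filter "$contentId*"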

Then have a look at the ContentLib on the primary server. It hasn’t detected that it needs to update from the source, so you need to trigger that. Here is what I did, although maybe there is a better way; I had run out of patience by this point, so I didn’t wait to see if it would also remove itself.

I removed all previous revisions of the application from the revision history, then changed the source path on the deployment type to point to an empty folder. Monitoring distmgr, I can see it has spotted this and creates a new content instance in the library with a new ContentID. The old ContentID, however, remains and is flagged as orphaned. Check the ContentLibrary for the physical presence, and run this query against the DB:

select * from OrphanedContents

After a maximum of 60 minutes, the content cleanup cycle will run on the server and remove these orphaned contents. That’s about how long it takes to find out that there is very little documentation on the content cleanup task on the internet, and very little help to be found searching for “delete orphaned content sccm 2012” in Bing or Google or whatever… It helps pass the time though.

So now we have a large application with an empty source path, not distributed to any DPs. And checking, we can see that there are absolutely no more traces on the DPs, in the ContentLibraries, or in the DB.

Now set the source back to the correct source path on the deployment type, and wait until this is completely updated in the ContentLibrary on the primary site. I didn’t wait for the cleanup task this time, but I checked back later and the references to the empty source were gone.

Check the ContentLibrary on the primary server (you need the new ContentID from distmgr.log) to see if the new content is physically there. Once it is, distribute to the DPs. Again, check distmgr.log and PkgXferMgr.log on the primary server, and smsdpprov.log on a DP, until they have finished processing the content. Check that the content is physically in the ContentLibrary, in DataLib under the ContentID.

Now try installing the application again on the client. This time it installs no problem.

A couple of things to point out here:

This deployment is an application delivered via task sequence but it applies to pure AppModel deployments as well.

We have a primary server which serves as the SMS Provider, and the DPs are all on separate boxes.

The problem arises when you have a very large amount of data in the source and you update through the console quicker than it actually takes for the processes to finish. On smaller content this may be OK, but the larger the package the more chance that something will get skipped in the file copy process.