I now have several add-ins in AppSource and all are doing well. But one in particular is getting a LOT more attention than the others: “Send to Planner.” While it is popular with the general public, IT admins are less enthusiastic because it needs so many permissions. I wrote a fairly detailed explanation of why it needs so many permissions: https://kryl.com/?page=kb&id=31. Ok, so some say: fine, I trust that. But others want more, a lot more, like my latest Penetration Testing results. They also ask for more than an attestation and wonder why I am not on the Office 365 certified list… me too, it turns out…
Sidebar first… This is leading somewhere, I promise… So, Office 365 Certification… I jumped through God knows how many hoops to try to get my Office 365 certification rather than just a generic Publisher Attestation. I had to provide a TON of documentation, and one of the artifacts is a good penetration test, which is actually a LOT of work. Anyway, after a few weeks of gathering information, studying how to do this, that, or the other thing, and a ton of back-and-forth in email… I was told…
“Well, so, yeah… Your company is… yeah… just not big enough for us to consider you.”
So why, when I submitted the VERY FIRST FORM with my employee count, did you let me proceed?! I wasted weeks collecting, finagling, and responding to requests for artifacts, and this… Ugh! Anyway, it was an educational experience, and from it I learned how to create a good Pen Test document.
…Back to the main attraction. It turns out that to make administrators HAPPY, you need to have a really good Pen Test, and one of the pickier areas they remark upon is your site's Content Security Policy. I use an NGINX server in Azure. It took a LOT of finagling to get it restrictive enough for me to score an A+ on https://securityheaders.com.
In my QA testing, everything seemed to be humming along with my A+, except that I started noticing my add-in failing to load sometimes. Pop open the F12 developer tools and voilà, a MicrosoftAjax.js error: the script failed to load because of my CSP. So, I added its source to my policy.
That seemed to make it happy, and my QA succeeded, so I published. I had figured it must be something in my code, but I did not find Ajax in any of my dependencies or my code base, so I gathered it was from office-js (rightly so). But no big deal, right? I did an NPM UPDATE and grabbed another coffee.
But then users started to contact me saying they were unable to use the add-in. Why? Well, it turns out that in certain conditions my code did something that would fail. And after a lot of troubleshooting, I started to see this error around areas where I am getting mailbox session data:
Cannot read properties of undefined (reading 'cannotDeserializeInvalidJson') at Sys.Serialization.JavaScriptSerializer.deserialize
So, oh boy… 2015 wet sock moment… the Microsoft Office JS library apparently injects the MicrosoftAjax.js library right into your taskpane page and uses it for JSON.parse(). What?
Send to Planner (and my other add-ins) were all showing this. And I had NEVER seen this Ajax error before, so it was new. I scoured the web, asked my new buddy ChatGPT, and… long story short, because this was something new… what did I do recently? Pen Testing… and what did I recently change? Oh, yeah, my CSP. Yes… more digging, trial and error:
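After that trial and error, here is a sketch of where the policy ends up, expressed as an NGINX directive. This is illustrative and not my full policy; the key points are the appsforoffice.microsoft.com script source (where Office.js is served from) and the two ‘unsafe-*’ keywords discussed below.

[code]
# Illustrative nginx CSP directive, not an exact policy; adjust the
# sources for your own site. Office.js is served from
# appsforoffice.microsoft.com, and script-src ends up needing both
# 'unsafe-inline' and 'unsafe-eval'.
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://appsforoffice.microsoft.com;" always;
[/code]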
Well, now I only have an A on security headers, and I get a nice little blurb about:
This policy contains ‘unsafe-inline’ which is dangerous in the script-src directive. This policy contains ‘unsafe-eval’ which is dangerous in the script-src directive.
Inline is one thing: with good use of input sanitization via DOMPurify (thanks MichaelZ), you can corner-case that. But unsafe-eval? I confirmed (on/off/on/off)… yep, it needs to be there.
Anyway, it was a long journey (many hours) and I am wiser for it. I know this is probably because OfficeJS still supports versions back to Office 2013 and Edge before it went down the Chromium path, but I hope it can move away from needing eval() and the MicrosoftAjax.js library in the near future, so I can add the (+) back to my security posture.
Hopefully, this post will help others avoid tripping up on this as they lock things down.
I am posting this here as it is a vexing issue, and I have received no feedback or guidance on it.
If you use Power Automate with the Planner Graph API, or you have developed a solution that uses the Planner Graph API to create tasks, you will be affected by this problem:
Essentially, the issue is that the new PREMIUM plans that you can create from the Teams Planner app (and soon the updated Planner website) are not fully supported in the Graph API.
When you try to POST (create) a task, for example, you get a 403 error. This would imply that you need some type of Graph API permission in your Entra ID app registration, but there is NO DOCUMENTATION on this at all.
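For context, here is a minimal sketch of the kind of Graph call involved. The access token, plan ID, and bucket ID are placeholders you would supply yourself; the call works for basic plans but comes back with the unexplained 403 when the plan is premium.

[code lang="javascript"]
// Sketch: create a Planner task with the Graph API.
// accessToken, planId, and bucketId are placeholders from your own
// auth flow and plan.
async function createPlannerTask(accessToken, planId, bucketId, title) {
  const response = await fetch("https://graph.microsoft.com/v1.0/planner/tasks", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${accessToken}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ planId: planId, bucketId: bucketId, title: title })
  });
  if (!response.ok) {
    // For premium plans, this is where the undocumented 403 surfaces.
    throw new Error("Create task failed: " + response.status);
  }
  return response.json();
}
[/code]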
If you or your organization use Microsoft Planner, have recently upgraded to Premium plans, and automate your plans, please UP VOTE the item referenced above.
If you are using Power Automate, here is a simple flow I created: if a task is created and assigned to you anywhere in Planner, it creates a new task for you on a specified plan in a specified bucket.
If I run this, it creates a new task on the Plan: “Starts of Basic”
If I upgrade the plan to PREMIUM and then go to any other plan and create a task, the automation fails: when the flow tries to create the new task, it gets a 403 error.
I recently started working on a new Office add-in and needed to attach a newly uploaded file to a Planner task item.
OMG!
I pulled out all my hair on this one, as I was unable to figure out from the documentation how to do it. Hack, hack, hack… 3 days later… well, I hope to save the world this frustration.
Here are the steps:
First, get the file from an upload. Determine the type and convert the file's binary content to a Base64 string.
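A minimal sketch of that conversion in the browser using FileReader (the file argument is whatever File object your upload control hands you):

[code lang="javascript"]
// Read an uploaded File into a Base64 string (without the data: prefix).
function fileToBase64(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    // reader.result is a data URL like "data:application/pdf;base64,JVBERi0..."
    // so keep only the part after the comma.
    reader.onload = () => resolve(reader.result.split(",")[1]);
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
}

// usage: const base64 = await fileToBase64(fileInput.files[0]);
[/code]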
Next, you need to GET the plan (its container points to the GROUP) by the plan ID: https://graph.microsoft.com/v1.0/planner/plans/${plannerId}
Then you create the item in the group drive container, where name is the name of the file, via a POST (with the Base64 in the body, content-type: text/plain): https://graph.microsoft.com/v1.0/groups/${plan.container.containerId}/drive/items/root:/${name}:/content
This returns a driveItem object, and from it you need the driveItem.webUrl. But this is the part that killed me: the documentation tells you to use an encoded URL. WRONG. That fails every single time. After a lot of poring over the documentation and watching what they were actually submitting, it turns out you only replace the “.”, the “:”, and whitespace. I created a small helper function in JS to do this.
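Here is a minimal sketch of such a helper, assuming (per the above) that only the “.”, the “:”, and spaces are replaced with their percent-encoded forms:

[code lang="javascript"]
// Prepare driveItem.webUrl for use as an external-reference key:
// replace only '.', ':' and spaces with their percent-encoded forms.
function encodeWebUrl(webUrl) {
  return webUrl
    .replace(/\./g, "%2E")
    .replace(/:/g, "%3A")
    .replace(/ /g, "%20");
}
[/code]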
Next, GET the task item by ID: https://graph.microsoft.com/v1.0/planner/tasks/${id}
Then GET the task item's details by ID: https://graph.microsoft.com/v1.0/planner/tasks/${id}/details
For this next part, you use the “@odata.etag” prop from the details, and you create the new entry with a PATCH to: https://graph.microsoft.com/v1.0/planner/tasks/${taskId}/details
For this part you need to create a fetch body like this: { references: ${ref} }, where the ref is defined as:
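A sketch of what that ref object looks like, based on the Graph plannerExternalReference type. The alias, type, and previewPriority values shown are illustrative, and encodeWebUrl is the helper sketched earlier:

[code lang="javascript"]
// The key is the encoded webUrl; the value describes the attachment.
const ref = {
  [encodeWebUrl(driveItem.webUrl)]: {
    "@odata.type": "microsoft.graph.plannerExternalReference",
    "alias": name,            // display name shown on the task
    "type": "Other",          // illustrative; set to match the file type
    "previewPriority": " !"   // illustrative ordering hint
  }
};

// and the PATCH body becomes:
const body = { references: ref };
[/code]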
To make the above call, you have to supply the @odata.etag. And this is the NEXT part that messed with my noodle: you do NOT use the whole value returned, AND you have to place it in double quotes. Ugh! So I wrote a bit more code to help with that.
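A minimal sketch of that cleanup, per the description above: drop the weak-validator W/ prefix and any existing quotes, then wrap the rest in double quotes.

[code lang="javascript"]
// Clean up the @odata.etag value for use in the If-Match header.
function cleanEtag(etag) {
  const value = etag.replace(/^W\//, "").replace(/"/g, "");
  return '"' + value + '"';
}

// usage: headers["If-Match"] = cleanEtag(details["@odata.etag"]);
[/code]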
Then, when you make the call, you place the cleaned-up tag in the If-Match header. Do not forget to add the Content-Type header and, if you want to see the returned result, the Prefer header as well.
The final submission looks like this:
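Here is a sketch of what that final call can look like, assuming the helpers above plus placeholder graphToken, taskId, details, and ref values gathered in the earlier steps:

[code lang="javascript"]
// Sketch of the final PATCH; graphToken, taskId, details, and ref are
// placeholders/values from the earlier steps.
async function attachReferenceToTask(graphToken, taskId, details, ref) {
  const response = await fetch(
    `https://graph.microsoft.com/v1.0/planner/tasks/${taskId}/details`,
    {
      method: "PATCH", // upper-case matters; see the note below
      headers: {
        "Authorization": `Bearer ${graphToken}`,
        "Content-Type": "application/json",
        "If-Match": cleanEtag(details["@odata.etag"]),
        "Prefer": "return=representation"
      },
      body: JSON.stringify({ references: ref })
    }
  );
  return response.json();
}
[/code]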
And let me stress this: you must PATCH this, not “patch.” Another thing I found is that GET/get, PUT/put, DELETE/delete, and POST/post all work interchangeably. But if you “patch” you will get some bizarre error about CORS and the path not being supported. If you hit the service with OPTIONS, you will see all the supported methods are returned in CAPS. Oddly, all of them work in lowercase or uppercase, except for PATCH.
I recently got a question about this add-in, created as a sample by the Microsoft Office Developer Team. It is provided as a sample and is a challenging bit of code to follow. It all started because I was trying to provide some advice to someone who is new to the OfficeJS world (but not to Office development in general). They wanted to build a cross-platform add-in for a cause. After tinkering with it for a few hours, I managed to get it to work. But then I looked it over and realized it had taken quite a bit of tinkering on my part.
In most cases I create add-ins based on React, but because I have done a lot of Trello development of late, I have gotten into more of a pure ES6 vibe. The code provided in the add-in, as I stated, was hard to follow, because I think I am losing my jQuery and ES5 JavaScript skills.
Anyway, as I said, I spent a few hours today and worked on getting it into ES6 format by moving most of the code into classes, removing redundant code, removing redundant HTML pages, consolidating CSS and adding a lot of additional JSDoc comments throughout the code.
NOTE: This looks and behaves exactly the same, but I have not thoroughly tested it. So, it might be a bit rough here and there. If you encounter issues please let me know. 😁
I have really enjoyed writing this bit of code because it stretched from the Trello API to the Office 365 API, two of my favorite programming interfaces. This Power-Up is similar to the default Trello Calendar Power-Up, the key difference being that it connects to your Outlook Calendar. So you can see all your Trello tasks and your Outlook appointments/meetings side by side in one place, and you can link your appointments to Trello cards and vice versa. With a month view and a weekly view, you can manage your calendar easily by dragging and dropping your Trello cards onto the calendar to create linked appointments for specific tasks.
In writing my new Outlook Add-in (Send to Trello), I got stuck on attachments. The first version did not include an attachments option because of two unique problems that compounded each other:
The Office Add-in API no longer provides a media type/MIME type with an attachment request. I am able to get the “blob()” from Office, but other than the file extension there is no way to determine the type. And sometimes a file does not have an extension, or the extension is wrong, etc.
The Trello API will not let you upload without supplying a MIME type; you cannot just give it a Base64 string as an attachment and let them figure it out.
So, I found something interesting while researching a workaround: nearly every Base64 string of a specific file type starts with the same “prolog” of text. Using this, combined with a fallback to the file extension, I was able to get attachments to work (for the attachment types supported by Trello). So, v1.02 will now include attachments.
Anyway, as for the workaround I found: it might be ugly, but I wanted to share it.
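A minimal sketch of that kind of check: sniff a few common file signatures from the start of the Base64 string and fall back to the extension. The prefix table is illustrative, not exhaustive.

[code lang="javascript"]
// Map of Base64 "prologs" (encoded file signatures) to MIME types.
const BASE64_PROLOGS = {
  "JVBER": "application/pdf",  // "%PDF"
  "iVBOR": "image/png",        // PNG signature
  "R0lGO": "image/gif",        // "GIF8"
  "/9j/": "image/jpeg",        // JPEG SOI marker
  "UEsDB": "application/zip"   // "PK.." (also docx/xlsx/pptx containers)
};

function mimeFromBase64(base64, extensionFallback) {
  for (const prolog in BASE64_PROLOGS) {
    if (base64.startsWith(prolog)) {
      return BASE64_PROLOGS[prolog];
    }
  }
  return extensionFallback; // e.g. a type looked up from the file extension
}
[/code]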
You know, sometimes the way you want to do things is not necessarily the right way. And even then, the right way makes life so hard you want to cry. With Office.JS, I recently blogged about “a way” you can test multiple sprints of your add-in in your development environment: Testing a Web Add-in with Multiple Manifest. In that article I explain a method of creating multiple manifests, one pointing to each “version or sprint” of your add-in. Recently, one of my customers threw a wrinkle into that idea. Not that creating multiple manifests was impossible, but they had two compounding requirements:
Each sprint points to a different folder of HTML/JS.
And they needed to use query string parameters (like this post) to load one of maybe 20 different configuration settings.
With up to 20 configurations per sprint and, let's say, 4 sprints, you end up with 80 manifest files. That is untenable. You need a better option; enter necessity…
So, let's say you need to test multiple configurations and multiple sprints. Here is how you set this up:
You will publish each sprint of your Web Site Project into a different folder in IIS. Like this:
Different sprints in separate folders
Then, you will create a new HTML page. I called mine Launcher.html. This page will contain a submit button and two drop-down select controls:
One for sprints
One for configs
Next, you will modify your manifest. You can add a new button called “Sprint Testing” for example and have it launch: Launcher.html.
In Launcher.js, populate those dropdowns. You can hard-code the values in the HTML, populate them from your client-side JS, or call your service controller. In my example, I call my service controller.
When the user clicks the submit button after choosing the desired settings, you build the URL to the proper sprint folder and then append the config value to the current query string (that last item is critical so that your sprint's Compose/Read page gets the important Office-related query string values passed to it). You will see below that I do this by replacing “launcher.html” with the new sprint path and then appending the config setting.
Once done, your code performs a location.replace() on the URL, and the proper sprint and configuration are loaded. You will only have a single manifest file, and you will be able to test/regression-test as much as needed with different sprints and configurations.
Here is what the HTML page I created looks like when loaded:
Here is the HTML:
[code lang="html" collapse="true" title="click to expand if the github.com embedding below is not visible."]
<!DOCTYPE html>
<html>
<head>
    <title>Select Which Sprint to Launch</title>
    <meta charset="utf-8" />
    <script src="https://appsforoffice.microsoft.com/lib/1/hosted/office.js" type="text/javascript"></script>
    <script src="Scripts/jquery-3.1.1.js" type="text/javascript"></script>
    <script src="launcher.js" type="text/javascript"></script>
</head>
<body>
    <h3>Please select the sprint you wish to launch:</h3>
    <select id="selectSprintList"></select>
    <h3>Please select the configuration:</h3>
    <select id="selectConfigList"></select>
    <button id="launchSprintButton">Submit</button>
</body>
</html>
[/code]
[code lang="javascript" collapse="true" title="click to expand if the github.com embedding below is not visible."]
/// <reference path="Scripts/jquery-3.1.1.js" />
(function () {
    'use strict';

    Office.initialize = function (reason) {
        $(document).ready(function () {
            /// Upon load, connect to the server and request
            /// the sprints so that we can fill the select
            /// field on the form.
            makeAjaxCall("GetSprints", null, function (response) {
                /** @type {String} */
                var data = response.Message.toString();
                /** @type {String[]} */
                var results = data.split(",");
                $.each(results, function (index, value) {
                    $("#selectSprintList").append("<option>" + value + "</option>");
                });
            }, function (error) {
                // show the error on the page
                $("body").append("<div>" + error + "</div>");
            });
            makeAjaxCall("GetConfigs", null, function (response) {
                /** @type {String} */
                var data = response.Message.toString();
                /** @type {String[]} */
                var results = data.split(",");
                $.each(results, function (index, value) {
                    $("#selectConfigList").append("<option>" + value + "</option>");
                });
            });
            $("#launchSprintButton").click(function (event) {
                /// The user clicked the submit button. Build the URL
                /// from the selections in the select controls and
                /// change the location.
                var sprint = $("#selectSprintList").val();
                var config = $("#selectConfigList").val();
                var url = "";
                if (sprint != null && sprint != "" &&
                    config != null && config != "") {
                    // replace the launcher page with the proper sprint folder location,
                    // but be sure to keep all the query string values - to pass on
                    // to our OfficeJS page. However, add our one config setting to the end
                    url = window.location.href.replace("launcher.html", sprint + "/ComposeMessage.html");
                    url += "&Config=" + config;
                    // replace the url
                    location.replace(url);
                }
            });
        });
    };
})();

// Helper function to call the web service controller
var makeAjaxCall = function (command, params, callback, error) {
    var dataToPassToService = {
        Command: command,
        Params: params
    };
    $.ajax({
        url: 'api/Default',
        type: 'POST',
        data: JSON.stringify(dataToPassToService),
        contentType: 'application/json;charset=utf-8',
        headers: { 'Access-Control-Allow-Origin': '*' },
        crossDomain: true
    }).done(function (data) {
        callback(data);
    }).fail(function (status) {
        error(status.statusText);
    });
};
[/code]
And here is the code for the controller. Now, I cheated and just returned a hard-coded string value for each call, but this at least helps you get the idea of how to build your own lists of sprints and configs:
[code lang="csharp" collapse="true" title="click to expand if the github.com embedding below is not visible."]
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

namespace OutlookLauncherDemoWeb.Controllers
{
    public class DefaultController : ApiController
    {
        /// <summary>
        /// Service Request – incoming
        /// </summary>
        public class WebServiceRequest
        {
            public string Command { get; set; }
            public string[] Params { get; set; }
        }

        /// <summary>
        /// Service Response – outgoing
        /// </summary>
        public class WebServiceResponse
        {
            public string Status { get; set; }
            public string Message { get; set; }
        }

        [HttpPost()]
        public WebServiceResponse Values(WebServiceRequest request)
        {
            WebServiceResponse response = null;
            switch (request.Command)
            {
                case "GetSprints":
                    return getSprints();
                case "GetConfigs":
                    return getConfigs();
            }
            return response;
        }

        // NOTE: hard-coded example values; replace these with your own
        // logic for enumerating sprint folders and configuration names.
        private WebServiceResponse getSprints()
        {
            return new WebServiceResponse()
            {
                Status = "Success",
                Message = "Sprint1,Sprint2,Sprint3,Sprint4"
            };
        }

        private WebServiceResponse getConfigs()
        {
            return new WebServiceResponse()
            {
                Status = "Success",
                Message = "Config1,Config2,Config3"
            };
        }
    }
}
[/code]
The idea here is that this add-in is a baseline for yours. You can build these components into a debug build of your add-in and it will call into each separate folder/sprint where you have posted previous versions.
There is a common problem I see when automating Excel from an external .NET application:
You launch Excel, open a workbook, perform work, and then try to close Excel, only to find that it remains in memory. You might even have code in your solution that tries to re-use an existing instance of Excel, and find that it grabs one of these “zombie” instances that do not have any add-ins loaded… maybe including an add-in your code relies on.
The issue is that Excel is not able to close because the primary thread of your application is holding on to a COM reference. Even adding Marshal.ReleaseComObject() and GC.Collect() calls probably will not improve the situation.
From my experience, and from what I have gathered, the COM reference stays locked until the thread that created it is stopped. I am not sure where the issue lies (Excel, COM, or .NET), but I have found a workaround. What you have to do is terminate your thread. And because you do not want to terminate your whole application, you create a NEW thread just for the Excel work. The following pattern is what I have found works:
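Here is a minimal C# sketch of that pattern (the class name, method name, and workbook path are placeholders; adapt the interop calls to your own work). All the Excel COM work happens on its own STA thread, the COM references are released there, and once that thread ends Excel is free to exit.

[code lang="csharp"]
// Sketch of the workaround pattern; names and the workbook path are
// placeholders, not a definitive implementation.
using System;
using System.Runtime.InteropServices;
using System.Threading;
using Excel = Microsoft.Office.Interop.Excel;

public static class ExcelWorkRunner
{
    public static void RunExcelWork()
    {
        Thread worker = new Thread(() =>
        {
            Excel.Application app = new Excel.Application();
            Excel.Workbook book = app.Workbooks.Open(@"C:\temp\sample.xlsx"); // placeholder path
            try
            {
                // ... perform your automation work here ...
            }
            finally
            {
                book.Close(false);
                app.Quit();
                Marshal.ReleaseComObject(book);
                Marshal.ReleaseComObject(app);
                GC.Collect();
                GC.WaitForPendingFinalizers();
            }
        });

        worker.SetApartmentState(ApartmentState.STA);
        worker.Start();
        worker.Join(); // when this thread exits, the COM references are truly released
    }
}
[/code]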
As I start to delve deeper into the world of Office.JS and JavaScript, I am having some fits with how things are done and implemented. Between getting used to a whole new load of terms, finding out that there are more frameworks than there are developers (ok, maybe that is a small exaggeration), and discovering that JavaScript is not “really” object-oriented (and that's “ok”), it is nearly too much to handle.
Then there is one of my biggest complaints with JavaScript: its handling of types. You can get into a lot of trouble here, and you can even break JavaScript at its roots if you really prototype something wrong (see monkey patching). Additionally, scope can be an issue. And then there is Visual Studio 2015 doing its best to help you with types, but then it fails you. For example, you type “something(dot)” and you expect IntelliSense to come to the rescue, and lo and behold, you get what I like to call “the yellow bangs of hell”™.
I had been coding with this problem, thinking I was all alone in this world, screaming every time an object, string, or number failed at IntelliSense. Then, on an internal Microsoft email thread where the virtues of TypeScript and JavaScript were being contemplated, a whole world was opened before my eyes. Enter JSDoc. It turns out there is an entire specification around this. And even better, Visual Studio 2015 supports it. And it is pretty easy to use. Plus, type definition is just the tip of the iceberg; there is a LOT more for me to learn. But for now, here is an example of how to define a few common types I frequently use:
[code language="javascript"]
/** @type {XMLDocument} */
var xmlDoc = $.parseXML(ewsResult.value);
/** @type {string} */
var theString;
/** @type {number} */
var myNumber;
[/code]
I have started to use this very recently in my proof of concepts I work on with my customers and I think I just increased my productivity by 50% (or more). Holy cow.