Wrapped up a basic dotfiles repo, with a smidgen of vim plugin-ness and a little OS X config. Still want to set up a dotsecrets repo or a subfolder using git-secret, but the very most basic pieces are in place. #
A couple of useful little OS X bits that I learned while doing this: Fn+Left-Arrow and Fn+Right-Arrow are Home and End (of line) shortcuts when using the default keyboard. Very useful when using vim in iTerm2, where Cmd+[LR]-Arrow changes tabs. I was also pointed to this, which prompted me to map Caps Lock to Escape (also useful in vim) #
I think I need to look into a terraform vim plugin. #
Trying to get a CKEditor Markdown plugin working in Manuscript. Using this one, which looks pretty good in the demo. I can reload the event editor’s CKEditor but can’t get the plugin to load via:
```js
// grab the existing editor's config before destroying it
var ed = CKEDITOR.instances.sEvent.config
CKEDITOR.instances.sEvent.destroy()
CKEDITOR.plugins.addExternal('markdown', 'https://rawgit.com/hectorguo/CKEditor-Markdown-Plugin/master/markdown/plugin.js')
ed.extraPlugins = "FBContextMenu,FBLink,FBFormSubmit,FBSnippets,FBInsertImage,FBCodeSnippets,markdown"
CKEDITOR.replace('sEvent', ed)
```
I can destroy and reload the Manuscript default CKEditor plugin with that code, but the Markdown plugin isn’t available. Also tried `CKEDITOR.plugins.add('markdown', 'https://rawgit.com/hectorguo/CKEditor-Markdown-Plugin/master/markdown/plugin.js')` instead of `addExternal()`, although I think the external version is the right API to use. #
I’m referencing this Stack Overflow question and answers and also pieces of this article. #
At SQL Saturday Madison today. #
Azure SQL Managed Instances seem like an interesting alternative to Azure SQL Database. Lift-and-shift model with “local” control at the instance (instead of database) level. #
With respect to tuning, the DTU calculator tool looks useful. Also, wtf is parameter sniffing? (Apparently it’s SQL Server reusing a query plan compiled for the first parameter values it saw, which can be terrible for later values.) Additionally, Jes talked about things at the logical SQL server level with respect to Azure SQL Database. I’d like to understand more about that concept. #
Analysis Services session was probably good for analysis services developers; not that useful for me personally. #
Learned about WPS’s remote work push, which seems to have come a long way in just a few years. They’ve been moving to remote first for about 2 years and have a ton of their current workforce working remotely. #
#Finally have a working prototype scaleset for Elasticsearch 1.7 running in Azure. Oddities:
- While the config trigger is being handled in terraform with a VM Extension, it still relies on a Bash script running in the context of the VM being created to set itself up.
- The config that’s executed by the VM Extension only approximates dynamic configuration by looping over some passed-in variables. Better than nothing, though.
Hopefully Consul will be able to address the service discovery concern, since right now the ES zen discovery addressing is dynamically hard-coded. It’s possible there’s a way to handle that “internally”, but so far I haven’t been able to find a way to access scaleset instance information during the startup process, so I need to gather that information in Bash when the VM starts up. #
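The gist of what I mean by gathering that in Bash — a minimal sketch, assuming a hostname prefix and instance count get passed in by the extension (names and values here are hypothetical, not the real ones):

```shell
# sketch only: prefix and count are assumed to be passed in at provision
# time; real names/values are made up for illustration
prefix="es-node"
count=3

hosts=""
for i in $(seq 0 $((count - 1))); do
  # scaleset instances get zero-padded numeric suffixes
  hosts="$hosts${hosts:+,}$(printf '%s%04d' "$prefix" "$i")"
done

# comma-separated list suitable for dropping into zen unicast hosts
echo "$hosts"   # es-node0000,es-node0001,es-node0002
```

Which is exactly the “dynamically hard-coded” part: the list is computed, but only from values I had to pass in myself.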
TIL about Bash’s greedy variable parsing, courtesy of krallja; this means that in a Bash substitution like `$host$prefix0000$counter`, `$prefix0000` is parsed as a single variable name instead of as a variable followed by a string. On the other hand, in `$host$prefix-0000$counter`, `$prefix` is the variable, since “-” isn’t valid in a variable name. You can also resolve the ambiguity with `$host${prefix}0000$counter`. #
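A quick sketch of the difference (variable values are arbitrary):

```shell
host="es" prefix="node" counter="1"

# $prefix0000 is read as one variable name ("prefix0000"), which is
# unset, so it expands to nothing
echo "$host$prefix0000$counter"      # es1

# "-" can't appear in a variable name, so the expansion stops at $prefix
echo "$host$prefix-0000$counter"     # esnode-00001

# braces delimit the variable name explicitly
echo "$host${prefix}0000$counter"    # esnode00001
```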
Still running into problems with the Elasticsearch scaleset. With the new image using the init script in root’s cron on reboot, the mount command to attach the local assets folder to the Azure file storage doesn’t happen. I can’t tell if it’s just not getting called or if it’s failing in some way. When it’s run at the command line everything works like a charm, but when run at startup the templating eval commands generate empty .yml and defaults files because there’s no source file.
So I switched, at least temporarily, back to the VM extension, but get the same result when running the script with `sudo sh test.sh`. Removing the `sh` from the extension’s `commandToExecute` causes an error:
```
azurerm_virtual_machine_scale_set.es_ss: Long running operation terminated with status 'Failed': Code="VMExtensionProvisioningError" Message="VM has reported a failure when processing extension 'test'. Error message: \"Enable failed: failed to execute command: command terminated with exit status=1\n[stdout]\n\n[stderr]\nsudo: es-shell-test.sh: command not found\n\"."
```
In the end I needed to go with `bash` without the `sudo`; the templating is a bash feature, and I suspect the `commandToExecute` runs as root anyway. #
Continuing along the Elasticsearch scaleset process; I’ve attempted the virtual machine extensions mentioned previously to little success. The expected files are created but are empty, and therefore don’t do the job. Probably some sort of permissions issue. I’ve moved on to trying to handle them with a startup script, with somewhat better effect thus far. And in point of fact, a startup script is likely to be a better bet anyway, since (provided the startup script is delivered in the packer image) new scaleset members will start up with that script having run; it’s not clear that the VM extension would run for new scaleset members, and I strongly suspect it would not. I also discovered that the eval process I’m using for bash file-templating is … unuseful for some of the files - one bash-script-like file in particular has a number of env variable references in it like `$ES_CLASSPATH` or `$ES_HOME` which, when processed through the eval process, end up empty, making the file invalid. I had to switch to using sed to do the replacement(s) I needed in that file. #
Of course all of this is still running into startup problems, maybe at least partly to do with not correctly handling (or not properly ignoring) failure modes? And the challenge of running it @reboot is that if it fails, the system doesn’t boot. I’m not 100% sure why this is; most of what I’m doing in there shouldn’t be relevant to system startup (aside from whether or not Elasticsearch can start, which is a separate issue and shouldn’t mean that the OS boot would fail). #
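As a stopgap I may just make the @reboot entry failure-proof, so a bad init run at least can’t cascade into anything else. A sketch (the script path and log file here are made up):

```shell
# hypothetical crontab line: capture all output and swallow the exit
# status so a failed init run can't propagate anywhere
# @reboot /opt/es/es-init.sh >> /var/log/es-init.log 2>&1 || true

# the "|| true" guard in action: a failing command no longer propagates
false || true
echo "exit status seen by the caller: $?"   # exit status seen by the caller: 0
```

Doesn’t explain why a failed cron job would block boot in the first place, but it should at least take the init script out of the suspect list.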
Deallocating a scaleset instance also takes much longer than I would have expected, and oddly, for some script failures a deallocate-and-start cycle seems to fix the issue. #
I’ve been looking at using an alert filter in Azure connected to a webhook to publish notifications (perhaps to Slack). Might be useful not only for knowing when a long-running operation is finished but also for tracking other events. Interestingly enough the Administrative event filter also publishes activity from the Azure CLI (and presumably from Azure-PowerShell). It’ll be interesting to see if it also catches terraform activity. #
#Still proceeding on updating my dotfiles, albeit slowly. I’ve more or less settled on Anish’s pattern, probably using his DotBot project. That will require getting to know git submodules better. I’m also planning on playing with git-secret to be able to store my local identity config in GitHub as well. Since I’ve recently upgraded to the paid plan I may as well try and make use of the private repos a little more, and with git-secret and a private repo together that will probably suffice. #
I’m also dithering about how to store more extensive info like dotstuff that’s stored in folders. Much of that probably belongs in submodules somehow as well. Also planning on decking out my vim a little more as previously noted, so that will have to come into it. I was planning on using vim-plug for that but I’ll need to make sure it supports installing from submodules if I’m going to go that route (as it seems to do). #
On the Elasticsearch scaleset front, since multicast isn’t an option I’ll have to do the cluster config some other way. It seems like using Azure Virtual Machine Custom Script Extensions (in terraform, prompted by this quickstart) might be the route to go; that seems like it might be a useful replacement for the remote-exec work I was doing in the individual VM setup. #
TIL that GitHub gist raw urls change when a gist is revised. This makes sense from a git point of view (as I suspect that the gist is just a small git repo on the back end) but I’m not sure whether that seems reasonable from the front end. #
Another interesting TIL: it seems like when using terraform to apply a scaleset that relies on a virtual machine extension using a remote file (I’m currently testing with a gist), if there’s an error in the remote file, reapplying after fixing the problem in the remote file doesn’t seem to be sufficient. I’ve had to `terraform destroy` followed by a fresh `terraform apply` to get it to pick up the changes. ~Not quite sure why yet.~ Apparently changes to the scaleset definition (including the fileUri for a VM extension script) don’t dirty the scaleset, so it doesn’t get picked up by the state refresh. Not sure exactly why this happens, but…. #
I’m refreshing my dotfiles configuration, which I haven’t been keeping up for quite some time. I’m digging into some vim enhancements (plugins and the like) since I’m using vim a lot more these days, and that’s prompted me to look at sharing those configurations among machines; thus circling back around to my dotfiles repo and rethinking how I was handling all of that. Also my efforts to get a usable Glitch console config will be well-served with some thought around how best to organize things. Especially since I switched to Oh-My-Zsh for console config it hadn’t seemed particularly useful to maintain separate dotfiles, but I’m definitely seeing where it may once again be useful. Additionally I may be sending my work MacBook out for repair, so being able to get another machine up and running quickly would be helpful. #
Digging around online for ideas about how to organize them I quickly stumbled across the dotfiles GitHub page, and specifically the tutorials section, and am reading through those. Lots to go through there, but so far Anish’s process looks pretty sensible, and DotBot in particular looks useful. #
#While thinking about trying to keep this up again I’ve been puzzling about post tags. Currently these aren’t supported out-of-the-box by Jekyll, and any plugins that might provide the functionality aren’t supported by the GitHub Pages Jekyll workflow, so I’d have to build my site locally and push the static files to the repo to use anything like that, and I’m still not ready to do that. A couple of folks have documented how they went about handling it, though, notably:
Of these right now the minddust approach seems the most straightforward, and aside from having to generate the category and tag files manually seems pretty simple. To be fair both of the other models require some file creation, and that should be easily automated, perhaps even hooking into the jekyll build process. #
This week I worked on a couple of internal Glitch projects (well, internal to Fog Creek but externally visible) to enhance our recruiting process. I won’t link to them here but anyone who applies for one of our open positions will see them. In any case I ran into a couple of slightly interesting bits that I want to capture, mostly having to do with “new” JavaScript hotness since I last regularly used JS (in, um, 2012 maybe?).
- Template literals rawk.
- The FormData object is super nice for quickly capturing an entire form’s fields for POSTing to an endpoint for processing. Saves a lot of boilerplate.
- If you’re using jQuery’s `.ajax()` function or the like, you must remember to set `contentType` and `processData` to `false` or things won’t work properly.
- When using the Google APIs (specifically the Sheets API, in this case), particularly when leveraging the Node.js client, using JWT Service Tokens is the best route forward. This took me entirely too long to figure out. It also took me a while to figure out that the service account created needed to be granted access explicitly. #
Rebooting the reboot. #
I never did write up the macro info I was talking about here, so here goes.
I type daynumparagraphnum at the end of the “paragraph” for the marker (i.e. today is day 88, so this paragraph is 882), then exit edit mode, then type `@x` to execute the macro in register ‘x’. The macro itself is `bdwa [#](#a^R")^[0i[](){:#a^R"}^[$:w^M`, which breaks down as follows:
- `bdwa` = (b) go back one “word” (sequence of alphanumeric characters), (d) delete to the next destination, (w) the next “word”, (a) switch to append mode; then append ‘ [#](#a’, paste the last buffer (`^R"`, which is the numbers previously deleted), then append ‘)’
- `^[0i` = exit append mode, move to the beginning of the line, and enter insert mode; then insert ‘[](){:#a’, paste the last register again (`^R"`, the previously deleted numbers), and continue inserting ‘}’
- `^[$:w^M` = exit insert mode and jump to the end of the line, write the file, and press ‘enter’
The characters ‘[](){:#axxx}’ insert an anchor tag with the id set to ‘a’ plus the paragraphnum, so `<a id='a###' />`, which allows the ‘#’ link at the end of the paragraph to navigate to the beginning of the paragraph. #
Still working on puppetizing cert installation on Windows, which is proving to be a slightly larger headache than hoped-for (although I should have expected it - as a co-worker said, “Puppet for Windows: Why Did You Make Us Build This? ™”). First thing I ran into is that Puppet logs its activity to the Application Event Log. So far so good. Then I discovered that it logs the output of PowerShell command failures one line per event. So a response like
```
CertUtil: Unexpected "-csp" option
Usage:
CertUtil [Options] [-dump]
CertUtil [Options] [-dump] File
Dump configuration information or files
Options:
-f -- Force overwrite
-user -- Use HKEY_CURRENT_USER keys or certificate store
-Unicode -- Write redirected output in Unicode
-gmt -- Display times as GMT
-seconds -- Display times with seconds and milliseconds
-silent -- Use silent flag to acquire crypt context
-split -- Split embedded ASN.1 elements, and save to files
-v -- Verbose operation
-privatekey -- Display password and private key data
-pin PIN -- Smart Card PIN
-p Password -- Password
-t Timeout -- URL fetch timeout in milliseconds
-sid WELL_KNOWN_SID_TYPE -- Numeric SID
22 -- Local System
23 -- Network Service
24 -- Local Service
CertUtil -? -- Display a verb list (command list)
CertUtil -dump -? -- Display help text for the "dump" verb
CertUtil -v -? -- Display all help text for all verbs
```
is logged as 28 separate events. Ugh. #
I remain a little baffled here, though, even after having pieced together the error messages. I couldn’t figure out why the PowerShell command was rejecting the `-csp` option, which is demonstrably correct when run in PowerShell directly. Then I realized, as evidenced by the output above and by the fact that subsequent events reject `-importpfx` as an invalid PowerShell command, that the PowerShell interpreter is breaking the command up for some reason, reading `certutil.exe -csp 'Microsoft Enhanced RSA and AES Cryptographic Provider' -p $password$ -importpfx $path$` as two (or more) separate commands, the first being `certutil.exe -csp 'Microsoft Enhanced RSA and AES Cryptographic Provider'`. Certutil then interprets that as a call without a verb, defaulting to the “dump” verb, which doesn’t take the `-csp` option. #
I hypothesized that perhaps something in the password, which contains symbols, was fracking things up, but changing the password to abcdefgh didn’t resolve the issue; so much for that. #
Instead, it appears to have something to do with the secrets expansion via string interpolation: placing the password in plain text seems to work, but interpolating it via `"${secrets::shop_cert_pass}"` fails, with PowerShell treating the command string as distinct commands; `$secrets::shop_cert_pass` (i.e. without the curly braces) is also interpreted as distinct commands by PowerShell. #
Finally sorted the root problem; the issue was with the way I was leveraging the puppet secrets module. We load secrets from local files on the puppet master, assign them to class variables in a secrets module, and then leverage the variables in our own classes. The variables get assigned the contents of the secrets files using `file()` or `chomp(file())` variously. When the variable’s value is assigned using `file()` it includes a terminal newline, which, when interpolated into a PowerShell command definition, is interpreted as starting a new command, causing the behavior I described above. The final fix was to switch to using `chomp(file())` instead. #
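A shell-level sketch of the same failure mode (the value and the `-p`/`-importpfx` framing are stand-ins for the real Puppet-interpolated PowerShell string): a value that ends in a newline turns one command into two when it’s interpolated and evaluated:

```shell
# simulate file() vs chomp(file()): raw ends with a newline
raw="s3cret
"
# command substitution strips trailing newlines - a cheap chomp()
chomped=$(printf '%s' "$raw")

# interpolating the raw value splits the eval'd string at the newline,
# so "-importpfx" starts a second, bogus command (errors suppressed)
eval "set -- -p ${raw} -importpfx" 2>/dev/null || true
echo "args to the first command: $#"   # 2 (just -p and the password)

eval "set -- -p ${chomped} -importpfx"
echo "args to the first command: $#"   # 3 (the whole command survives)
```

Same shape as the Puppet/PowerShell case: nothing wrong with the command, just an invisible newline riding along with the secret.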
Other interesting things I ran into and / or learned:
- using the Windows Puppet Command Prompt to run `puppet agent --verbose` will echo the return values from the catalog’s commands to the console (more or less obvious, I guess). The `exec` resource’s `logoutput` may also come in handy here.
- running `puppet agent` keeps the agent running “in the console” so it continues to request the catalog from the master and apply it on whatever cycle it’s configured for.
- in our environment, at least, `tail /var/log/daemon.log -f` on the puppet master outputs the master’s activity, which helped solve some master-side problems (like permissions errors). #
I ran into some not-very-useful errors (among them messages along the lines of `Not authorized to call find on /file_metadata`) when trying to get the pfx file to the target server in puppet. What they ended up amounting to is that files are expected to be stored as part of a module for the `puppet:///` URL syntax to work properly. Once I put the file in the right place on the puppet master, the file resource with `source => puppet:///...` worked fine. There’s a little extra syntactical oddness there; the path isn’t a real on-disk path, it just describes where in the module structure to find the file. #
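For my own future reference, the mapping looks roughly like this (module and file names here are made up):

```
source => 'puppet:///modules/certs/shop.pfx'
        resolves on the master to
<modulepath>/certs/files/shop.pfx
```

Note the `files/` directory appears on disk but not in the URL, which is most of the syntactical oddness.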
It occurs to me that I might be able to use a probot hook to update paragraphs to add links to them automatically when committing to the repo without resorting to my vim macro. I’m not really sure how the Jekyll hooks that build the site are related to checkins though and whether doing an update using probot and then recommitting will update the rendered resource, or if there’s some risk of an infinite loop. #
#Digging in a little further to puppetizing certs, and apparently the proper way to transfer files from the master to the agents is via modules. This matches what I found about the source attribute of a file resource; when constructed properly with a source file in the correct module-based directory it transferred successfully. #
Additionally, ensuring a file resource “absent” actually deletes it from the node. I was a little confused by that, given what co-workers told me about elements not being removed when removed from puppet, but it occurs to me that that’s probably in reference to people being removed from files, say for permissions; that probably does not remove their permissions. #
While prepping for activating the Meltdown/Spectre mitigations, a co-worker dug up this python library python-hwinfo for retrieving hardware data from linux systems (including remotely). I haven’t actually tried it but it seems pretty cool. #
- transfer the cert from my mac to the jump box: `scp -P #### -p %filepath% cori@jumpbox:/tmp`
- xfer again to the puppet master box
- move to the puppet secrets folder and set permissions and access #
Dug into and better synthesized some portions of the interactions between nagios and puppet in our environment yesterday, including how secrets are managed. Also got a better feel for how some of the checks are executed; I’d been over them with team members before but working through it on my own helped solidify it. #
After getting my vim macro for paragraph links working I’m realizing that one thing that I’d like to look into would be putting the anchor for the link at the beginning of the paragraph. Should be doable with vim commands. The resulting link structure would look like the one on this paragraph and consists of `[](){:#adaynumparanum}` at the beginning of the paragraph and just `[#](#adaynumparanum)` for the link. #
Here’s a paragraph trying the new macro `bdwa [#](#a")0i[](){:#a"}$:w`.
Yep, much better. #
Interesting thing I did not know: C# 6 and above has string interpolation using ‘$’. So `$"Hello, my name is {name}"` is functionally the same as `String.Format("Hello, my name is {0}", name)`. Sweet. #
Started the UCI Objective-C Coursera course this morning; intro videos seem reasonably well-put together, as one would expect from Coursera. It’s on-demand, which is good. I do wish there were other Obj-C courses available. I guess it’s not too surprising given that Swift is the “new” hotness for iOS development. #
Trying to figure out how to use a vim macro to handle portions of paragraph linking. I’d like to figure out how to yank the day/paragraph link id to a register and then use that in a macro. I’m part of the way there, but still have some details to work out, namely:
- how to yank the pattern I want. I was using `daynum-paragraphnum` (“9-1”) but vim doesn’t recognize that as a word. Maybe I’ll have to go to `daynumparagraphnum` (91) instead.
- how to drop the contents of a register into a macro. I’m using `^R"` for the most recent yank, but that doesn’t seem to work. #
Let’s test my new macro, `[#](#a^R"){:#a^R"}^[`. #
Revision: `bdwa [#](#a^R"){:#a^R"}^[`. #
Oh yeah, baby, that’s a winner! I’ll have to write that one up in more detail. #
#In keeping with looking at ZSSRichTextEditor as a place to contribute and learn some new skills (and in keeping with my longstanding habit of signing up for online classes and never doing anything about them) I’ve signed up for an Objective-C and a Swift class. They’re both pretty small and maybe I can actually keep up with them? #
One of the things that’s interesting about ZSSRichTextEditor is that the current maintainer is looking for someone to help out with the project, without any takers. I wonder how the licensing and copyright would line up if Zed Said Studio were no longer the “sponsor”. #
The tomfoolery with the Windows cert and programmatic access to the private key is surprisingly annoying, but I’m kind of looking forward to seeing about puppetizing it. #
#Watched the Glitch React Starter Kit video series this morning; it’s well done and approachable - a really nice overview. Parts of it seem not quite made for complete beginners to coding, but maybe that’s not really the audience; Glitch already offers a few learn-to-code apps. #
Tried steel cut oats for breakfast this morning; not really great, IMO. #
Among the things I’d like to learn (better) in 2018: node (finish the Wes Bos course, at least), python. Also look for an open source thing to contribute to; this looks promising (and would give me the opportunity to learn some iOS development). #
#Fiddling around with using a GitHub repo with a Glitch app from the Glitch command line. It’s a little bit of a pain, partly due to authentication from a container (that is to say that it would be annoying to have to set up GitHub authentication for every Glitch project). #
Yesterday’s cert problem’s still an issue; apparently the private key, while it’s associated with the certificate, isn’t accessible to .NET 4.5 and below because it’s stored using a CNG crypto provider, which can only be accessed by .NET 4.6 and above. We’ll need to convert the private key to a CSP provider. #
#Really having difficulty yesterday with a cert for signing some data; can’t get it installed on Windows 8.1 in such a way that the private key is available. #
My loose 2018 goal of at least one github commit a day fell to the wayside yesterday, although to be honest, blog post commits were already kind of a cheat. I need to get rolling on something tangible for learning something new that will fulfill that goal. #
As it turns out, installing the cert into a Windows 2008 server in such a way that it recognizes the private key, and then exporting the cert (key included) to *.pfx format, allows it to be imported into Windows 8.1 with the key intact. Long way around, but there you have it. #
#Expanding on my poking around on getting valid html IDs for reference links, one thing I found is that `[#](#a1){:#a1}` will work while `[#](#1a){:#1a}` will not. In retrospect this makes perfect sense, since html IDs need to start with a letter. #
Digging a little deeper into things here, I note that by default the GitHub Pages implementation of Jekyll uses the GFM parser for kramdown. That may explain why the kramdown reference link syntax doesn’t work; I’m going to switch my config to use the kramdown parser and see if that helps, although that may break backtick-fenced code blocks, which may be an unacceptable sacrifice. We shall see. #
On the code block front, I’m going to have to update the styling of the code blocks; they’re not sufficiently set off from the surrounding content when inline. Also the style of the single post pages leaves something to be desired…. #
Well, that (the `[#][a6]` reference link syntax) didn’t work even with the syntax changes, and I hate the way the code blocks look (although to be honest maybe that’s not the fault of the parser). In any case: reverting. I wonder if I can add a vim macro of some sort to do that for me? #
TODO: code block and single post styling.
#Apparently GitHub-flavored markup, or the kramdown implementation used by the GitHub Pages version of Jekyll, doesn’t follow the reference anchor syntax. Per https://kramdown.gettalong.org/syntax.html#reference-links, `[#][a1]` should render a ref-style link, but that doesn’t work properly (seems like if you don’t follow the `[]()` pattern you don’t get a link). Instead you can use `[#](#a1){:#a1}`, leveraging kramdown’s span IAL syntax, to get to the same thing.
So that gives me a way to manually create anchor tags to small posts (not as nice as the automatic paragraph linking that Dave Winer’s blogging tool (whatever it is right now) provides, but maybe good enough for the GH pages Jekyll engine process for now). I could probably do this “better” by using Jekyll locally and building my static pages and pushing them, but I’m not into that just yet.
However it does leave one problem: if I use the “a1” pattern for those links they’ll work great on a daily page (presuming I basically do one “post” per day) but will collide (and not link to the right place) on the listing page. Still bears thinking about. Also worth thinking about: do I want a “#” link for each paragraph, or for each untitled series of paragraphs that comprises a “thought”? #
Another Thing I Learned: kissmetrics allows you to track behavior on arbitrary components without updating instrumentation in your app. As long as you have CSS identifiers on the components you want to look at you can set up reports on them without additional changes to your code. It’s a marketing tool but I wasn’t aware of that piece of things. #
#Starting off the new year with a new “microblog” using Jekyll and the built-in GitHub Pages build pipeline. Hoping / planning to use it to capture Things I Learn and the like. #
First thing I learned this year: the uwsgi Python package publishes a `uwsgi` module into Python apps that are running under uWSGI. Knowing that, you can try `import uwsgi`, and if you get back an `ImportError` then you’re not running under uWSGI. This is a good way to run unit tests against a Flask application that has level 0 initialization code that only runs successfully on the server. #
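Sketched from the shell (assumes `python3` is on the PATH; the `uwsgi` module only exists inside a uWSGI-hosted process, so outside uWSGI the import fails and the command exits nonzero):

```shell
# outside uWSGI the import raises ImportError, so python exits nonzero
if python3 -c 'import uwsgi' 2>/dev/null; then
  echo "running under uWSGI"
else
  echo "not running under uWSGI"
fi
```

The in-app version is the same idea: wrap `import uwsgi` in a try/except and branch on the `ImportError`.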
