tl;dr: Check out SpiderPig[0], a web frontend for deploying code
changes to Wikimedia’s MediaWiki.
—
What is SpiderPig?
SpiderPig is a web app that deploys code to Wikimedia’s production MediaWikis.
—
How do I use it?
Today, anyone who has deployed a backport in the past year[1] can log
in at https://spiderpig.wikimedia.org.
Follow instructions on Wikitech to learn how to use SpiderPig[2] or
request access[3].
Join Release Engineering for a deployment party 🥳 to try out SpiderPig:
From *Mon, 12 May 2025* through *Thu, 15 May 2025*, members of Release
Engineering will be in the #wikimedia-operations IRC channel during the
daily backport windows to share the joy of SpiderPig:
- UTC afternoon backport window (13:00 UTC)
- UTC late backport window (20:00 UTC)
More details on the deployment calendar.[4]
—
Why?
During backport windows[5], deployers traditionally use our `scap
backport` command-line tool to ship code. Meanwhile, developers wait
on standby to check the code.
For many deployments, deployers punch in commands and relay
information to developers. SpiderPig eliminates the need to punch in
commands for simple changes, freeing deployers to focus on complex
changes.
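For comparison, the traditional command-line flow looks roughly like this
(a sketch; 1234567 stands in for a real Gerrit change number):

  $ scap backport 1234567

scap pulls the change onto the staging servers and prompts the deployer to
confirm before syncing it everywhere.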
Using SpiderPig feels like showing up for a backport window to get a change deployed:
- Enter a Gerrit change number in the search and click “Start backport.”
- When prompted, check the change on the staging servers[6].
- Confirm that the change looks good on staging to make it live everywhere!
—
Thanks to:
- Alexandros Kosiaris for his partnership in getting SpiderPig into production.
- Lauralyn Watson, Eric Gardner, and other folks working on Codex for
their help integrating Codex and Vue.
- Simon Lyngshede and Moritz Mühlenhoff for their help integrating
with our single sign-on system.
- Ahmon Dancy for leading this project and the Release Engineering
team for all their work to make deployments better.
Tyler Cipriani (he/him)
Engineering Manager, Release Engineering
Wikimedia Foundation
[0]: <https://wikitech.wikimedia.org/wiki/Scap/SpiderPig>
[1]: <https://ldap.toolforge.org/group/spiderpig-access>
[2]: <https://wikitech.wikimedia.org/wiki/Scap/SpiderPig#Log_in_to_SpiderPig>
[3]: <https://wikitech.wikimedia.org/wiki/Scap/SpiderPig#Access_to_SpiderPig>
[4]: <https://wikitech.wikimedia.org/wiki/Deployments>
[5]: <https://wikitech.wikimedia.org/wiki/Backport_windows>
[6]: <https://wikitech.wikimedia.org/wiki/WikimediaDebug>
Hi!
We've recently noticed on Polish Wikiquote that abuse filters set to
prevent creation of accounts during waves of vandalism are leaky.
Some `autocreateaccount` actions do not trigger the filter, even though
they satisfy its conditions. We first observed this behavior at the end
of March (https://phabricator.wikimedia.org/T391096), and the leaks
still seem to be present now.
The expected behavior is that the filter prevents the account from being created.
Is it possible that the introduction of SUL3 changed some aspect of how
filters prevent account creation? Has anyone else noticed a similar
issue on their wiki?
Regards,
Marcin
User:Msz2001
Hey all,
This is a quick note to highlight that we've created the REL1_44 branch for
MediaWiki core and each of the extensions and skins in Wikimedia git [0].
This is the first step in the release process for MediaWiki 1.44.0, which
should be out in June 2025, approximately six months after MediaWiki 1.43.0.
The branches reflect the code as of the last 'alpha' branch for the
release, 1.44.0-wmf.28, which is being deployed to Wikimedia wikis this
week for MediaWiki itself and those extensions and skins available there.
From now on, patches that land in the main development branch of MediaWiki
and its bundled extensions and skins will be slated for the MediaWiki 1.44
release, unless specifically backported [1].
If you are working on a critical bug fix that will affect the code in the
release, once the patch has been merged into the development branch, you
should propose it for backporting by cherry-picking it to the REL1_44 branch.
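For example, with git and the git-review tool, proposing a backport looks
roughly like this (a sketch; the SHA is a placeholder for your merged change):

  $ git fetch origin
  $ git checkout -b my-backport origin/REL1_44
  $ git cherry-pick -x <merged-commit-sha>
  $ git review REL1_44

Gerrit's web UI also has a Cherry Pick action that accomplishes the same thing.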
If you are working on a new feature, it should not be backported. If
you have an urgent case where the work should block the release for everyone
else, please file a task against the `mw-1.44-release` project on
Phabricator [2].
If you have tickets that are tagged for `mw-1.44-release`, please finish
them, untag them, or reach out to get them resolved in the next few days.
We hope to issue the first release candidate, 1.44.0-rc.0, in two weeks'
time, and if all goes well, to then release MediaWiki 1.44.0 a few weeks
after that.
[0]: https://www.mediawiki.org/wiki/Bundled_extensions_and_skins [1]:
https://www.mediawiki.org/wiki/Backporting_fixes [2]:
https://phabricator.wikimedia.org/tag/mw-1.44-release/
Best regards,
--
Mateus Santos (he/him)
Product Manager, MediaWiki Engineering Group
MediaWiki now supports adding context fields to all log events within the
current request. You can use
LoggerFactory::getContext()->add( [ 'myCustomField' => ... ] );
to add a field to all subsequent logs in the request, or
$scope = LoggerFactory::getContext()->addScoped( [ 'myCustomField' => ... ] );
to do it within a given scope.
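For example, a minimal sketch of the scoped form (the field value is a
placeholder; as with other scoped helpers in MediaWiki, the field should stop
being attached once $scope is disposed of):

$scope = LoggerFactory::getContext()->addScoped( [ 'myCustomField' => 'someValue' ] );
// ... log events here include myCustomField ...
unset( $scope ); // later log events no longer include it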
For some of the existing usage, see
https://gerrit.wikimedia.org/r/q/hashtag:%22global-logging-context%22
(For example, you can now filter Logstash to events logged in a specific
job class with context_job_type, or a given special page with
context_special_page_name.)
For more information, see https://phabricator.wikimedia.org/T142313
Hi all,
With MediaWiki at the WMF moving to Kubernetes, it's now time to start
running manual maintenance scripts there. Any time you would previously SSH
to a mwmaint host and run mwscript, follow these steps instead. The old way
will continue working for a little while, but it will be going away.
What's familiar:
Starting a maintenance script looks like this:
rzl@deploy2002:~$ mwscript-k8s --comment="T341553" -- Version.php --wiki=enwiki
Any options for the mwscript-k8s tool, as described below, go before the --.
After the --, the first argument is the script name; everything else is
passed to the script. These are the same arguments you're used to passing
to mwscript.
What's different:
- Run mwscript-k8s on a deployment host, not the maintenance host. Either
deployment host will work; your job will automatically run in whichever
data center is active, so you no longer need to change hosts when there’s a
switchover.
- You don't need a tmux. By default the tool launches your maintenance
script and exits immediately, without waiting for your job to finish. If
you log out of the deployment host, your job keeps running on the
Kubernetes cluster.
- Kubernetes saves the maintenance script's output for seven days after
completion. By default, mwscript-k8s prints a kubectl command that you (or
anyone else) can paste and run to monitor the output or save it to a file.
- As a convenience, you can pass -f (--follow) to mwscript-k8s to immediately
begin tailing the script output. If you like, you can do this inside a tmux
and keep the same workflow as before. Either way, you can safely disconnect
and your script will continue running on Kubernetes.
rzl@deploy2002:~$ mwscript-k8s -f -- Version.php --wiki=testwiki
[...]
MediaWiki version: 1.43.0-wmf.24 LTS (built: 22:35, 23 September 2024)
- For scripts that take input on stdin, you can pass --attach to
mwscript-k8s, either interactively or in a pipeline.
rzl@deploy2002:~$ mwscript-k8s --attach -- shell.php --wiki=testwiki
[...]
Psy Shell v0.12.3 (PHP 7.4.33 — cli) by Justin Hileman
> $wmgRealm
= "production"
>
rzl@deploy2002:~$ cat example_url.txt | mwscript-k8s --attach -- purgeList.php
[...]
Purging 1 urls
Done!
- Your maintenance script runs in a Docker container which will not outlive
it, so it can't save persistent files to disk. Ensure your script logs its
important output to stdout, or persists it in a database or other remote
storage.
- The --comment flag sets an optional (but encouraged) descriptive label,
such as a task number.
- Using standard kubectl commands[1][2], you can check the status and view
the output of your running jobs, or anyone else's. (Example: `kube_env
mw-script codfw; kubectl get pod -l username=rzl`. A fuller sketch follows
after the footnotes.)
[1]: https://wikitech.wikimedia.org/wiki/Kubernetes/Kubectl
[2]: https://kubernetes.io/docs/reference/kubectl/quick-reference/
What's not supported yet:
- Maintenance scripts launched automatically on a timer. We're working on
migrating them -- for now, this is for one-off scripts launched by hand.
- If your job is interrupted (e.g. by hardware problems), Kubernetes can
automatically move it to another machine and restart it, babysitting it
until it completes. But we only want to do that if your job is safe to
restart. So by default, if your job is interrupted, it will stay stopped
until you restart it yourself. Soon, we'll add an option to declare "this
is idempotent, please restart it as needed"; we recommend that design for
new scripts.
- No support yet for mwscriptwikiset, foreachwiki, foreachwikiindblist,
etc., but we'll add similar functionality as flags to mwscript-k8s.
Your feedback:
Let me know how it goes by email or IRC, or on Phab (T341553
<https://phabricator.wikimedia.org/T341553>). If mwscript-k8s doesn't work
for you, for now you can fall back to using the mwmaint hosts as before --
but they will be going away. Please report any problems sooner rather than
later, so that we can ensure the new system meets your needs before that
happens.
Thanks,
Reuven, for Service Ops SRE
I want to build something to monitor the [[WP:DYK]] system on enwiki. I want to look at the length of various queues: nominations, approved nominations, number of hook sets ready for publication, perhaps a few more. Updates will be infrequent: perhaps only once per day, and certainly no more often than once per hour. Initially all I want to do is graph these. Eventually I might want to do some alerting.
In the old days, I would just have a simple script that threw some numbers at statsd. Looking at https://wikitech.wikimedia.org/wiki/Prometheus, it looks like that translates into using the pushgateway, but it's far from clear what I need to do to set this up. The docs talk about Puppet and certificates. Can somebody walk me through the setup?
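For reference, this is the sort of thing I have in mind, using the
pushgateway's plain HTTP interface (a sketch; the host and metric name are
made up):

  $ echo "dyk_approved_nominations 42" | \
      curl --data-binary @- http://<pushgateway-host>:9091/metrics/job/dyk_monitor

But I don't know what host to point that at, or how access is set up.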