Starting today, editors can use the *<graph>* tag to include complex graphs
and maps inside articles.
*Demo:* https://www.mediawiki.org/wiki/Extension:Graph/Demo
*Vega's demo:* http://trifacta.github.io/vega/editor/?spec=scatter_matrix
*Extension info:* https://www.mediawiki.org/wiki/Extension:Graph
*Vega's docs:* https://github.com/trifacta/vega/wiki
*Bug reports:* https://phabricator.wikimedia.org/ - project tag #graph
The graph tag supports template parameter expansion. There is also a Graphoid
service that converts graphs into images. Currently, Graphoid is used when
the browser does not support modern JavaScript, but I plan to use it for
all anonymous users: downloading the large JS code needed to render graphs
is significantly slower than showing an image.
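For illustration, here is a rough sketch of what an on-wiki graph could look
like: a <graph> tag wrapping a Vega spec. The spec below is a made-up minimal
bar chart, not taken from the demo page, so treat all field names and values
as illustrative only:

```
<graph>
{
  "width": 200, "height": 100,
  "data": [
    { "name": "table",
      "values": [ {"x": 1, "y": 3}, {"x": 2, "y": 5}, {"x": 3, "y": 2} ] }
  ],
  "scales": [
    { "name": "x", "type": "ordinal", "range": "width",
      "domain": {"data": "table", "field": "data.x"} },
    { "name": "y", "range": "height",
      "domain": {"data": "table", "field": "data.y"} }
  ],
  "marks": [
    { "type": "rect", "from": {"data": "table"},
      "properties": { "enter": {
        "x": {"scale": "x", "field": "data.x"},
        "width": {"scale": "x", "band": true, "offset": -1},
        "y": {"scale": "y", "field": "data.y"},
        "y2": {"scale": "y", "value": 0}
      } } }
  ]
}
</graph>
```

With template parameter expansion, values inside the spec can come from
template arguments instead of being hard-coded; see the demo page above for
real, working specs.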
Potential future growth (developers needed!):
* Documentation and better tutorials
* Visualize as you type - show changes in the graph while editing its code
* A VisualEditor plugin
* Animation <https://github.com/trifacta/vega/wiki/Interaction-Scenarios>
Project history: Exactly one year ago, Dan Andreescu (milimetric) and Jon
Robson demoed the use of the Vega visualization grammar
<https://trifacta.github.io/vega/> in MediaWiki. The project stayed dormant
for almost half a year, until the Zero team decided it was a good solution
for on-wiki graphs. The project was rewritten and gained many new features,
such as template parameters. Yet doing graphs just for the Zero portal
seemed silly. A wider audience meant we now had to support older browsers,
and thus the Graphoid service was born.
This project could not have happened without the help of Dan Andreescu,
Brion Vibber, Timo Tijhof, Chris Steipp, Max Semenik, Marko Obrovac,
Alexandros Kosiaris, Jon Robson, Gabriel Wicke, and others who have helped
me develop, test, instrument, and deploy the Graph extension and the
Graphoid service. I would also like to thank the Vega team for making this
amazing library.
--Yurik
I've noticed that the image previews in Hovercards (the 'Popups' extension)
do not respect high-density displays and can end up a little blurry as a
result.
While patching the extension, someone recommended that I bracket the
detected density to the values we use for default thumbnail generation on
the wiki (the 1, 1.5, and 2x densities we specify in the 'srcset' attribute
on <img>s), so that browsers zoomed slightly off from default, or devices
not quite at the most common densities, don't force extra thumbnail
renders.
Do folks have any preference for whether I should add that as a separate
function like $.bracketedDevicePixelRatio() or just directly bracket the
output of the $.devicePixelRatio wrapper function?
A quick look at code using $.devicePixelRatio() indicates most uses are
multiplying an image size to get a thumbnail, so that might be convenient,
but I don't want to cause surprises. ;)
Task: https://phabricator.wikimedia.org/T97935
Core patch: https://gerrit.wikimedia.org/r/#/c/208820/
Hovercards patch: https://gerrit.wikimedia.org/r/#/c/208515/
The current version of the patch adds a separate $.bracketedDevicePixelRatio().
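For the curious, the bracketing itself is simple. A minimal sketch follows;
the function name mirrors the patch, but the exact rounding strategy here is
an assumption, not necessarily what the patch does:

```javascript
// Snap a reported devicePixelRatio to the densities we already generate
// thumbnails for (1, 1.5, 2), so a browser zoomed to e.g. 1.1x reuses an
// existing rendering instead of forcing a fresh 1.1x thumbnail render.
var brackets = [ 1, 1.5, 2 ];

function bracketedDevicePixelRatio( ratio ) {
	var i;
	// Round up to the next supported density so images never come out
	// blurrier than necessary; cap at the largest bracket.
	for ( i = 0; i < brackets.length; i++ ) {
		if ( ratio <= brackets[ i ] ) {
			return brackets[ i ];
		}
	}
	return brackets[ brackets.length - 1 ];
}
```

With this round-up strategy, a ratio of 1.1 brackets to 1.5 and a ratio of
2.6 caps at 2; a nearest-value strategy would be just as easy if saving
bandwidth matters more than sharpness.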
-- brion
I made a patch [0] for T39665 [1] about 6 months ago. It has been
rotting in Gerrit ever since.
The core bug is related to glibc's iconv implementation and PHP (and, I
think, HHVM as well). To work around the iconv bug, I wrote a little
helper function that uses mb_convert_encoding() instead if it is present.
In review, PleaseStand pointed out that the libmbfl library used by
mb_convert_encoding() differs from iconv in the character sets it supports
and in character set naming [2].
I was hoping that someone on this list could step in and either convince
me to abandon this patch and pretend I never investigated the problem, or
help design a solution that papers over these differences in a reasonable
way.
[0]: https://gerrit.wikimedia.org/r/#/c/172101/
[1]: https://phabricator.wikimedia.org/T39665
[2]: https://php.net/manual/en/mbstring.encodings.php
Bryan
--
Bryan Davis Wikimedia Foundation <bd808(a)wikimedia.org>
[[m:User:BDavis_(WMF)]] Sr Software Engineer Boise, ID USA
irc: bd808 v:415.839.6885 x6855
On Wed, May 6, 2015 at 12:13 AM Greg Grossmeier <greg(a)wikimedia.org> wrote:
> Quick general question: are you proposing this for pywikibot only? I
> think the answer is yes, just making sure.
>
No, I'm proposing to do this in general. I never mentioned pywikibot as
the goal; I just said I did a test in pywikibot and it worked well.
> <quote name="Amir Ladsgroup" date="2015-05-05" time="07:05:48 +0000">
> > Hey,
> > GitHub has a huge community of developers, and collaborating with them
> > can be beneficial for both us and them, but Wikimedia code is in Gerrit
> > (and in the future in Phabricator) and our bug tracker is in Phabricator.
> > Sometimes it feels like we are on another planet.
> > Wikimedia has a mirror on GitHub, but we close pull requests
> > immediately, and we barely check issues raised there. There is also a
> > big notice on GitHub[1]: "if you want to help, do it our way". Suddenly
> > I got the idea that if we could synchronize GitHub activity with Gerrit
> > and Phabricator, it would help us by letting others help in their own
> > way. It made me so excited that I wrote a bot yesterday that
> > automatically duplicates the patches of pull requests in Gerrit and
> > makes a comment on the pull request stating that we made a patch in
> > Gerrit. I did a test in pywikibot and it worked well [2][3].
> >
> > Note that the bot doesn't create a pull request for every Gerrit
> > patch; rather, it creates a Gerrit patch for every open pull request.
> >
> > But before I go on, we need to discuss several important aspects of
> > this idea:
> > 1- Is it really necessary to do this? Do you agree we need something
> > like that?
> > 2- I think a bot to duplicate pull requests is not the best idea, since
> > it creates them under the bot account and not under the original user
> > account. We could create a plugin for Phabricator to do that, but
> > issues like privacy would bother us (using OAuth wouldn't be a bad
> > idea). What do you think? What do you suggest?
> > 3- Even if we create a plugin, a bot to synchronize comments and code
> > reviews is still needed. I wrote my original code in a way that lets me
> > expand it to do this job too, but do you agree we need to do this?
> > 4- We can also expand this bot to create a Phabricator task for each
> > issue that has been created (except pull requests). Is that okay?
> >
> > I published my code in [4].
> >
> > [1]: https://github.com/wikimedia/pywikibot-core "Github mirror of
> > "pywikibot/core" - our actual code is hosted with Gerrit (please see
> > https://www.mediawiki.org/wiki/Developer_access for contributing"
> > [2]: https://github.com/wikimedia/pywikibot-core/pull/5
> > [3]: https://gerrit.wikimedia.org/r/208906
> > [4]: https://github.com/Ladsgroup/sync_github_bot
> >
> > Best
>
> > _______________________________________________
> > Pywikipedia-l mailing list
> > Pywikipedia-l(a)lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/pywikipedia-l
>
>
> --
> | Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
> | identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
>
Hello,
A quick reminder about Wikimedia Language Engineering team's IRC office
hour later today at 1430 UTC[1] on #wikimedia-office. Please see below for
the original announcement, local time, and agenda. We will post logs on
metawiki[2] after the event.
Thanks
Runa
[1] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150505T1430
[2] https://meta.wikimedia.org/wiki/IRC_office_hours#Office_hour_logs
---------- Forwarded message ----------
From: Runa Bhattacharjee <rbhattacharjee(a)wikimedia.org>
Date: Thu, Apr 30, 2015 at 7:29 PM
Subject: [x-post] Next Language Engineering IRC Office Hour is on 5th May
2015 (Tuesday) at 1430 UTC
To: MediaWiki internationalisation <mediawiki-i18n(a)lists.wikimedia.org>,
Wikimedia developers <wikitech-l(a)lists.wikimedia.org>, Wikimedia Mailing
List <wikimedia-l(a)lists.wikimedia.org>, "Wikimedia & GLAM collaboration
[Public]" <glam(a)lists.wikimedia.org>
[x-posted announcement]
Hello,
The next IRC office hour of the Language Engineering team of the Wikimedia
Foundation will be on May 5, 2015 (Tuesday) at 1430 UTC on
#wikimedia-office. We missed a few of our regular monthly office hours, but
from May onwards we will be back on schedule.
There has been significant progress around Content Translation[1] and it is
now available as a beta feature on several Wikipedias[2]. We’d love to hear
comments, suggestions and any feedback that will help us make this tool
better.
Please see below to check local time and event details. Questions can also
be sent to me ahead of the event.
Thanks
Runa
[1] http://blog.wikimedia.org/2015/04/08/the-new-content-translation-tool/
[2]
https://www.mediawiki.org/wiki/Content_translation/Languages#Available_lang…
Monthly IRC Office Hour:
==================
# Date: May 5, 2015 (Tuesday)
# Time: 1430 UTC (Check local time:
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150505T1430 )
# IRC channel: #wikimedia-office
--
Language Engineering - Outreach and QA Coordinator
Wikimedia Foundation
After a lot of work, we're ready to provide a more sensible data layout for
format=json results (and also format=php). The changes are generally
backwards-compatible for API clients, but extension developers might have
some work to do. If your extension is maintained in Gerrit, much of the
necessary conversion has already been done for you (the major exception
being booleans that were violating the old API output conventions).
The general theme is that the ApiResult arrays now have more metadata,
which is used to apply a backwards-compatible transformation for clients
that need it, and an optional transformation so JSON output needn't be
limited by the restrictions of XML. At the same time, improvements were
made to ApiResult and ApiFormatXml that should make them easier for
developers to use.
Relevant changes include:
- Several ApiResult methods were deprecated. If your extension is
maintained in Gerrit, these should have already been taken care of for you
(with the exception of T95168 <https://phabricator.wikimedia.org/T95168>
where work is ongoing), but new code will need to avoid the deprecated
methods.
- All ApiResult methods that operate on a passed-in array (rather than
internal data) are now static, and static versions of all relevant data-
and metadata-manipulation methods are provided. This should reduce the need
for passing ApiResult instances around just to be able to set metadata.
- Properties with names beginning with underscores are reserved for API
metadata (following the lead of existing "_element" and "_subelements"),
and will be stripped from output. Such properties may be marked as
non-metadata using ApiResult::setPreserveKeysList(), if necessary.
- PHP arrays can now be tagged with "array types" to indicate whether
they should be output as arrays or hashes. This is particularly useful to
fix T12887 <https://phabricator.wikimedia.org/T12887>.
- The "*" property is deprecated in favor of a properly-named property
and special metadata to identify it for the XML format and for
back-transformation. Use ApiResult::setContentValue() instead of
ApiResult::setContent(), and all the details are handled for you.
- ApiFormatXml will no longer throw an exception if you forget to call
ApiResult::setIndexedTagName()!
- ApiFormatXml will now reversibly mangle tag and attribute names that
are not valid XML, instead of irreversibly mangling spaces and outputting
invalid XML for other stuff.
- ApiResult will now validate data added (e.g. adding resources or
non-finite floats will throw an exception) and auto-convert objects. The
ApiSerializable interface can be used to control object conversion, if
__toString() or cast-to-array is inappropriate.
- Actual booleans should now be added to ApiResult, and will be
automatically converted to the old convention (empty-string for true and
absent for false) when needed for backwards compatibility. Code that was
violating the old convention will need to use the new
ApiResult::META_BC_BOOLS metadata property to prevent this conversion.
- Modules that output {"key":{"*":"value"}} to avoid large strings in
XML attributes can now output {"key":"value"} while still maintaining
<container><key>value</key></container> in the XML format, using
ApiResult::META_BC_SUBELEMENTS. New code should use
ApiResult::setSubelementsList() instead.
- Modules outputting hashes as
[{"name":"key1","*":"value1"},{"name":"key2","*":"value2"}] (due to the
keys being invalid for XML) can now output as
{"key1":"value1","key2":"value2"} in JSON while maintaining <container><item
name="key1">value1</item><item name="key2">value2</item></container> in
XML format, using array types "kvp" or "BCkvp".
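To make the boolean and "*" changes above concrete, here is a hypothetical
result fragment (the "watched" and "content" property names are invented
for illustration, not taken from any real module). Under the old
conventions a module would emit:

```
{ "page": { "watched": "", "*": "Some wikitext" } }
```

where an empty string means true and the content hides under "*". After
switching to real booleans and ApiResult::setContentValue(), the same data
can come out as:

```
{ "page": { "watched": true, "content": "Some wikitext" } }
```

with the backwards-compatible transformation reproducing the old shape for
clients that still expect it.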
I apologize for forgetting to announce this sooner. If developers need
assistance with API issues or code review for API modules, please do reach
out to me.
--
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation