Please add your memories of Hildegard Dillon (Kleinn) - post a Comment below.
My mother was born Hildegard Kleinn on October 3rd, 1925 in Köln (Cologne), in the Germany that is nowadays called the Weimar Republic. It is a place that is becoming better known these days because of the TV series "Babylon Berlin", but a young girl growing up in a professor's household would have seen little of the chaos that swirled around in those days. When she was 8 years old, her country began its transformation into the Third Reich. She grew up in a coastal city on the Baltic named Stettin; nowadays that city is in Poland and is named Szczecin. At age 15, she became a Wandervogel and went off, all by herself, on a bicycle tour through the new territories of Germany that were formerly part of Poland. She told me of one night that she spent in a Polish castle: the noble family living there was having a ball, and loaned her a dress for the evening so she could attend.
One summer she went to a summer camp organized by the Hitler Youth. She was assigned to help in the kitchen, cooking porridge for breakfast every day. One day there was no fuel for cooking, so the lady in charge said they would make do, and told her to mix raw oatmeal, sugar and cocoa. She made this chocolate porridge in big bowls, and the kids scooped it out into their bowls and ate it with milk. This recipe became a frequent breakfast of my own childhood in Canada.
Soon, however, she finished school and went to work as a Red Cross nurse in a camp near the railway yards, because the war on the Eastern Front was beginning to turn, and the trains bringing supplies to the boys on the front lines were returning with a steady stream of wounded, and of Soviet prisoners. She told me of one time when a trainload of prisoners was parked in the train yard calling out for food, which she understood because she had studied a year of Russian in school. Meanwhile another trainload of milk cows was mooing in distress because they had not been milked. Hildegard organized some of the other nurses to take buckets and milk those cows, and then they gave the milk to the Soviet prisoners to drink. I often wonder if any of those prisoners returned home and told the story of the young German nurses and the fresh warm milk.
Of course, not all my mother's stories of the war years were so nice. She was a very observant person and very smart. Later in the war she lived with relatives in the Rheinland. Regularly there were planes bombing and strafing factories in the area, and she had noticed a pattern. So one day, when the planes roared over the horizon and her friend urged her to run and hide under the railway bridge for safety, she went the other way and lay in a ditch beside the road. She was close enough to see arms and legs fly through the air as the bombers made a direct hit on the railway bridge.
After the war, she found herself in the British zone, and her school English came in very handy, getting her a job as a stenographer for a British medical officer. There she completely mastered English and developed an accent so genuine that she could pass for an Englishwoman on the telephone, which allowed her to use the British army telephone system that was strictly reserved for British personnel. As the new country of West Germany returned to normalcy, and people had jobs and could travel again, she joined a travel agency, first selling railway tickets and then selling holidays abroad, sailing on the Holland America Line. One of the perks of this job was free travel, and she visited London, England.
Like many Germans of her generation, in 1954 she decided to emigrate for better opportunities in life. She bought passage to Australia, but shortly before she left, one of her girlfriends came to her in desperation: she had met an Australian doctor in the British army, and after he returned home they wrote back and forth, and now she wanted to marry him. But she could not afford a ticket to Australia. Hildegard gave her the ticket, and instead got a cheaper one to Halifax in Canada. She then went to Toronto, where some distant relatives in the Gieffers family lived, and got a job in a plastics moulding factory. In my childhood we had piles of plastic combs of many colors and shapes, and plastic pitchers and cups, all rejected at the factory for some minor defect.
She wasn't in Canada for long before she met a young Irishman, Frank Dillon, who had also emigrated to Canada after serving in the British Merchant Marine during the war. He was a driving instructor, and he taught her to drive and get a licence. Then he married her. A bit more than a year later, after their first child was born, they left the city of Toronto and bought a house and 6 acres of land in the countryside near the town of Newmarket. It had a big old chicken barn, and they started raising chickens and selling eggs. When battery farms took over the egg business and dropped the price below profitability, they switched to raising pullets for meat. Much of the farm work was done by Hildegard, because Frank spent his days at a coffee shop in Newmarket: he had his business phone installed there and taught people to drive, and when he was out on a lesson, a waitress at the coffee shop answered the phone for him. This was the last half of the 1950s.
More kids came along, six in all, the farm was sold, and the family bought a former general store in Ravenshoe, a few miles to the north. Unfortunately, in 1968, Frank died. Hildegard had made friends with the bank manager and knew that the accounts would be frozen until the will was processed, so she called the bank and managed to withdraw enough money to keep things going for a while. She struggled to get the general store open again, became the official postmistress of Ravenshoe, Ontario, and looked after 6 kids. It was a tough time for her, and so she sold the business and bought another country home, on about 3 acres near Orillia. She took a course at a local college to become a carer for mentally retarded people, and then got a job at the Ontario Hospital in Orillia. This was not a hospital; it was a government institution where mentally retarded people lived out their whole lives. Even though the government soon stopped taking in new children, Hildegard worked there to the end of her career, looking after the women who were already there and had known no other life.
All of her children inherited a bit of Hilde's wanderlust and moved away from Orillia, so when she retired, she decided to wander a bit as well, and came to live in Vernon, BC. It was there that she finally went into a care home, when she could no longer cope on her own, and a couple of years later, when a new home opened in Armstrong, she moved there. That is my mother. Since I am the oldest child, I remember more of the earlier days; also, when she was beginning to work at the Ontario Hospital, it was a learning curve for her, and she used to sit with me in the evening and talk about her past, and about the problems she ran into during her day's work.
Where Did Civilization Begin?
No, this blog is not about ancient history and prehistory. That just happens to be a personal interest of mine, and I believe that a lot of what we think we know about beginnings is less than accurate. Most of the time I will write about software and systems that are somehow related to the cloud. For some 10 years now I have been spending a good chunk of my time learning about and trialing various tools, systems, etc.
Friday 24 August 2018
Tuesday 26 July 2011
Asynchronous GNU Readline
I've been playing around with async server tools in Python, writing a memcache clone in a couple of different ways. One version uses 0MQ REP/REQ sockets for the protocol, and another tries to clone the actual memcache protocol using asynchat/asyncore. For another project I wrote a command shell for testing purposes, to exercise the API and peek into internals. However, that command shell ran standalone, not as part of an application. Generally, if you write a command shell you will want to use GNU readline for input, because things like up-arrow, line editing and Ctrl-R search make life simpler.
Unfortunately the Python library that wraps GNU readline is blocking, so it won't work in an async server. But readline also has an async API, so I set about investigating how to use it from Python. There seemed to be two choices: write a C module that wraps the async features, or use ctypes and call libreadline.so directly. Of course I googled a bit to see if anyone had done it, and that is when I learned about ctypesgen. This is a nice little tool which takes a library and its include files and spits out a ctypes-based Python module that lets a Python application use the same API as a C program would: ctypesgen: A Pure Python Wrapper Generator for ctypes.
So I tried it out like so:
python ctgen/ctypesgen.py -lreadline /usr/include/readline/*.h -o ctreadline.py
The end result was ctreadline.py, a Python module all ready for use. It only took a short while to read the libreadline docs and knock together this simple test program:
import atexit
import select
import sys

import ctreadline

runEnabled = True

def exitCleanup():
    # Make sure the terminal is restored if we exit while readline is active.
    ctreadline.rl_callback_handler_remove()

atexit.register(exitCleanup)

def cb(ln):
    # Called by readline each time a complete line has been typed.
    global runEnabled
    # Use == rather than "is" here: ln is a ctypes object, not the None singleton.
    if ln == None:
        # EOF (Ctrl-D on an empty line): stop the main loop.
        runEnabled = False
        ctreadline.rl_set_prompt("")
    elif len(ln) > 0:
        ctreadline.add_history(ln)
        print ln

# Keep a reference to the wrapped callback so ctypes does not garbage-collect it.
callback = ctreadline.rl_vcpfunc_t(cb)
ctreadline.rl_callback_handler_install("async>> ", callback)

while runEnabled:
    # Poll with a short timeout, and only feed readline when stdin is actually
    # readable; calling rl_callback_read_char() with nothing to read would block.
    readable, _, _ = select.select([sys.stdin], [], [], 0.001)
    if readable:
        ctreadline.rl_callback_read_char()

print ""
It doesn't do much, just echoes back what you type, but it does so asynchronously using select, so it should be pretty straightforward to integrate into a command shell program or any async server based on select. Don't forget to try out your favourite readline features when you run it, things like Ctrl-R to search back, up-arrow and line editing.
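To illustrate that integration, here is a rough sketch (not part of my original test) of how the same callback could share one select loop with a toy TCP server; the port number and canned reply are made up, and you would quit with Ctrl-C:

import select
import socket
import sys

import ctreadline  # the ctypesgen-generated module from above

def cb(ln):
    # Same idea as before; == None because ln is a ctypes object.
    if ln != None and len(ln) > 0:
        ctreadline.add_history(ln)
        print ln

callback = ctreadline.rl_vcpfunc_t(cb)  # keep a reference alive
ctreadline.rl_callback_handler_install("async>> ", callback)

# A toy TCP server sharing the same select loop.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('127.0.0.1', 11222))
server.listen(5)

while True:
    readable, _, _ = select.select([sys.stdin, server], [], [])
    for fd in readable:
        if fd is sys.stdin:
            ctreadline.rl_callback_read_char()  # feed one keystroke to readline
        else:
            conn, _ = server.accept()           # a client connected
            conn.sendall('hello from the async shell\r\n')
            conn.close()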
The thing that took the longest to figure out was that you cannot compare the return value from libreadline in the usual way. I would normally write if ln is None: but the ctypes wrapper hands back an object that compares equal to None without being the None singleton, so you need to use the double equals sign.
It can be hard to track down an API reference for libreadline and I ended up using the one for DOS here on Delorie's site.
Saturday 16 July 2011
AMQP and Python
I've spent the last 4 months writing Python code for a system that polls a datasource for changes every minute and feeds messages into an AMQP message queue. Worker processes listening on the queue pick up a message, process it, and move on to the next one. These processes run forever (unless they crash or are killed). When I took on the task there was a prototype that amounted to a simple finite state machine, so when I planned out my version I was thinking in terms of state machines. As a result, I mapped out several states that a job had to pass through and wrote a process to handle each step. The state transitions were handled by a set of AMQP message queues, so when a process finished with a message like "Record 272727 changed" it passed the same message on to another queue listened to by the next process. Overall this was just an ETL application that Extracted data from a database server, Transformed it, and Loaded it into another database for the SOLR search engine. Aside from the unusual destination database it was not that different from any other Extract/Transform/Load application.
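To make the shape of one stage concrete, here is a minimal sketch using kombu (more on library choice below); the exchange, queue, routing key and connection details are invented for illustration:

from kombu import Connection, Exchange, Queue

# Invented names for one stage of the pipeline.
exchange = Exchange('etl', type='direct')
changed = Queue('changed', exchange, routing_key='changed')

def handle(body, message):
    # body arrives already decoded because the producer used the JSON serializer.
    print 'processing record', body['recordid']
    # ... the real worker would do its extract/transform step here,
    # then publish the same message to the next stage's queue ...
    message.ack()

with Connection('amqp://guest:guest@localhost//') as conn:
    # The poller publishes a change notification like this.
    producer = conn.Producer(serializer='json')
    producer.publish({'recordid': '272727'}, exchange=exchange,
                     routing_key='changed', declare=[changed])
    # And a worker consumes it like this.
    with conn.Consumer(changed, callbacks=[handle]):
        conn.drain_events(timeout=5)  # one event; raises socket.timeout if idle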
For AMQP we had already decided to use RabbitMQ, but we had only installed the old version included in Ubuntu Linux. This old version did not support the management plugin, and there were difficulties in getting our operations folks to accept a non-Ubuntu service package with a newer RabbitMQ, so I was looking for alternate ways to get at RabbitMQ information. Since RabbitMQ is written in Erlang, I read up on what Erlang is and on the actor model of multiprocess computing that it implements. I realized that I could run code on another Erlang node, on the same machine or not, and talk to RabbitMQ as long as I had the cookie for the RabbitMQ server. I was able to write some simple code to talk to Rabbit and get its status, but getting data about a queue was harder, because inside Rabbit each queue is managed by a process with no name. I haven't yet gotten to the bottom of it, but I did do some experimenting with a Python library called PyInterface that allows a Python program to emulate an Erlang node and interact with RabbitMQ.
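In the meantime, one blunt fallback is to shell out to rabbitmqctl and parse its tab-separated output, assuming the script runs on the broker host as a user with access to the Erlang cookie (and Python 2.7 for check_output):

import subprocess

# Ask the broker for per-queue backlog; -q suppresses the banner line.
out = subprocess.check_output(['rabbitmqctl', '-q', 'list_queues',
                               'name', 'messages', 'consumers'])
for line in out.splitlines():
    name, messages, consumers = line.split('\t')
    print name, 'backlog:', messages, 'consumers:', consumers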
Along the way, I realized that the architecture I had chosen for this Python system was rather like the Erlang actor model, and that I really should have a supervisor process to manage all my workers, restart them if they hang or crash, and perhaps even manage the number of instances of a process. At this point, the supervisor (written in Python) just starts one of each worker, except for a couple of bottlenecks where it starts 15 of one and 4 of another, as sketched below. When I crack the problem of interfacing Python to RabbitMQ, I will be able to monitor queue backlogs over time and increase or decrease the number of worker instances on a particular queue. This will bring it closer to being a system that just runs forever and heals itself.
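A stripped-down sketch of that supervisor might look like this; the worker script names are hypothetical, but the instance counts match what I described:

import subprocess
import time

# Hypothetical worker scripts; the two bottleneck stages get extra instances.
WORKERS = {'extract_worker.py': 1, 'transform_worker.py': 15, 'load_worker.py': 4}

def spawn(script):
    return subprocess.Popen(['python', script])

procs = []
for script, count in WORKERS.items():
    for _ in range(count):
        procs.append((script, spawn(script)))

while True:
    time.sleep(5)
    for i, (script, proc) in enumerate(procs):
        if proc.poll() is not None:
            # The worker exited or crashed; restart it.
            procs[i] = (script, spawn(script))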
Of course sometimes things go wrong, and there are three failure queues that collect messages when that happens. The workers listening to these queues delay messages for a while, then resubmit them to the start of the process chain for a retry. I used JSON object format for the messages, e.g. {"recordid": "272727"}, so it was easy to add more attributes to a message before passing it to the next queue. Messages going into a failure queue get a reason added, and before resubmitting I add a retries attribute so that I can count how many times the message has gone through and failed. If a db server goes down, a message might make two or three round trips before it is up again. And finally, if there are too many retries, I punt the message into a queue for human attention. Over time, with lots of testing, the number of messages in that queue has gone from 50% of all messages to a tiny number.
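The resubmission logic amounts to a few lines; here is a sketch in which publish_to stands in for the real publishing code and the retry limit is invented:

import json

MAX_RETRIES = 5  # invented threshold; tune to taste

def handle_failure(body, reason, publish_to):
    # Annotate the failed message, then either resubmit it or escalate it.
    msg = json.loads(body)
    msg['reason'] = reason
    msg['retries'] = msg.get('retries', 0) + 1
    if msg['retries'] > MAX_RETRIES:
        publish_to('human_attention', json.dumps(msg))  # queue for a human
    else:
        publish_to('start_of_chain', json.dumps(msg))   # retry from the top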
When I started, I had a prototype with half a dozen Python scripts using three different Python AMQP libraries: pika, amqplib and kombu. In evaluating them I realised that all three had shortcomings and diverged, in different ways, from the AMQP model. In the end, I decided to stick with kombu over the amqplib transport, in the expectation that I could switch transports if I needed to in the future. But in writing an MQ shell program to manipulate AMQP message queues, I realised that kombu's code was overly convoluted, so I wrote a shim layer over it. More recently I have started to rewrite the shim to run directly over amqplib, but that only supports AMQP 0.8.1. I would rather use AMQP 0.9.1, and recently discovered two more libraries: pylibrabbitmq, which wraps the C library, and haigha, which started life as amqplib and seems to be the most up-to-date AMQP library that adheres to the AMQP model.
In any case, I've decided to rewrite my MQ shell from scratch, this time using haigha for AMQP and plac (instead of cmd) for the shell framework. It will show up on my github site as soon as I have something that can publish messages to an exchange and subscribe to a queue.
Friday 5 February 2010
And a long time passes
Up until now most of my blogging-like activities have been participating in some old-fashioned Internet mailing lists, a few web forums, and various other social media inside my employer's firewall. It has been interesting to compare the experience of using the different tools, to see what is the same and what is different.
Mailing lists are pretty much a free-for-all, generally with an archive that is accessed by date, more or less, and no real structure to the discussions except the shared topic and whatever is hot today. For some things they are really good, but I've come to wonder why mailing lists have not shifted into a form that allows them to be accessed other than by email clients or a simple 1990s-style archive website.
Forums have more structure beyond the overall shared topic. They are generally subdivided into one or two levels of subforums and when you write something, you reinforce the filing system by choosing the appropriate subforum. And if you don't pick the right subforum, there is generally a moderator who will fix it up for you. This structured filing system makes it fairly easy to jump into a forum site and zero in on the postings that are of interest. However, if the shared topic of the site changes or grows, it can be difficult to adapt the forum structure other than by adding or deleting subforums.
Blogs, on the other hand, have both structure and malleability. If you tag all your postings and delete irrelevant comments, then a blog site is at least as well indexed as a forum, and pretty easy to jump into. But it also has the same kind of time indexing that a mailing list archive has, so you can zero in on a certain time period if you want to. And if you want to change the structure of the existing postings, you can do so by editing the tags and/or adding new ones. Maybe you decide that BABY is no longer an appropriate tag once your toddler is talking up a storm, because it draws too much attention to those postings. But you don't want to delete the posts, just de-emphasize them. Just delete the BABY tag, or change it to a more general one like FAMILY. This kind of subforum merging is generally harder to do with forums, or impossible.
And then there are wikis. Some wiki engines have blog-like and forum-like features; others lean towards collaborative document editing. And then there is Wikipedia, which is much imitated within corporate firewalls. I have contributed lots of articles, a few templates, and some help guidelines to my employer's wiki encyclopedia. It is ranked high by our intranet search engine, so we use it to provide a few sentences about important intranet pages that would otherwise be lost in the noise. A user looking for info does a search, hits our wiki encyclopedia page, and finds a brief explanation with links to intranet sites or collaboration wiki pages with comprehensive info. If it is an acronym definition article, a template inserts automatic links at the bottom of the article to search Wikipedia, our collaboration wiki, or the whole intranet. The template also adds the acronym to categories such as Common Acronyms, Our Company Only, or Specialised Local Meanings. Note that I mentioned a collaboration wiki: we have a completely separate wiki installation for people to do collaborative work, from building documents, to tracking projects, to publishing weekly reports. More people use that wiki than use the SharePoint installation.
We also have an internal blog site but I mostly post comments on other people's blogs to encourage them to post more. A good blog is a dialog, and in a business environment you need people to take the lead and show everyone else that it is OK to comment on a blog, even if it is written by a senior manager. Now that we have 30-40 blog postings a day, and about a dozen new blogs being started every week, I've decided to switch from commenting to posting, but to do it out here on the Internet because I think that what I have to say is more useful to a general audience.
Most of my postings will be about technology, but there will also be some about the future, and about society. And some wild ideas, because I like to keep my out-of-the-box thinking engine fully tuned.