|21 Oct 2005 @ 19:01, by fleer. Internet|
A turn for the better for blogs and democracy
|16 Aug 2005 @ 14:00, by ming. Internet|
One weird thing about blog postings and Google power is that sometimes one individual post becomes, like, THE place to go for a certain subject, just because Google has made it show up on the first page for that search term. I've had a few of those, where one post over many months attracted hundreds and hundreds of people who think that this somehow is one of THE authoritative places to go. And they leave lots of comments, and a whole little community forms.
I did a post two years ago which was just a few paragraphs about my own experience of almost falling for a Romanian eBay scam, buying a non-existent computer. Still, today, if you search Google for ebay scams, my article is number 5. And there are 255 comments so far. Lots of people have shared their experiences and tips there. Even the Romanian scammers have shown up, to occasionally taunt their victims, in Romanian.
And if you search for Yamashita Treasure, you'll see my post as number 2. Which was just a mention of a review of a book. But, there too, a whole little community has formed. 167 comments. Every day treasure hunters are sharing their tips, asking for help, etc. You know, they've found some interesting underground spot in the Philippines, and they need somebody with a ground radar, or explosives expertise, or whatever. And, hey, it was just one of my thousands of blog postings. I'm not even participating. Intriguing.
A blog posting is not particularly suited to this, even though it kind of works. The comments are one long thread, and it is easy to find out how to add a new comment. But, I mean, if one had known that that posting was going to be a central gathering place for that subject, maybe one would have put some more facilities there. Like, on my ebay scam page, it would make sense to list other resources that could be helpful. So, should I go back and change my original posting? Doesn't quite make sense to me either.
But maybe one could have some additional features available that might spring into action when a certain post transforms from being just a note written at some point in time into being a *place* that people go to as a resource. A beacon that gathers people around a certain subject. If it really is a gathering place then maybe it should be linked with more of a forum, or a wiki, or a bookmarking feature, and maybe it should display resources from other places on the same subject.
It is a little odd. Meeting in an old blog post is kind of like meeting on page 207 of "Moby Dick". Maybe everything ought to be a potential meeting place.
|25 May 2005 @ 01:31, by mre. Internet|
The Los Angeles Jewish-Muslim Email Dialogue on Human Rights has just concluded most wonderfully. Both the individual messages and the development through the five rounds of the dialogue are very impressive, from the "Oneness that underlies all reality" of the first message to the specific five-point agenda for Southern California of the last. See groupdialog.org/humanrights/results.htm to view the dialogue results.
| 23 Feb 2005 @ 21:34, by ming. Internet|
Last week Google offered to host part of Wikipedia's content. Yesterday Wikipedia was taken offline for some hours by a power failure and subsequent database corruption.
Now, I pay attention to those things not just because Wikipedia is a great resource that needs to be supported. But also because I'm working on a clone of it, and I've been busy downloading from it recently.
The intriguing thing is, first of all, that this is even a possible and acceptable thing to do. Lots of people have put a lot of work into Wikipedia, and it is generously offered as a free service by a non-profit foundation. And not only that, you can go and download the whole database, and the software needed for running a local copy of it on another server. Because it is all based on free licenses and open source. And, for that matter, it is in many ways a good thing if people serve up copies of it. It takes a load off their servers, and potentially others might find new things to do with it.
Anyway, that's what I'm trying to do. At this point I've mostly succeeded in making it show what it should show, which is a good start.
Even though the parts are in principle freely available, it is still a pretty massive undertaking. Just the database of current English-language articles is around 2GB. And then there are the pictures. They offer in one download the pictures that are in the "Commons" section, i.e. they're public domain. That's around 2GB there too. But most of the pictures are considered Fair Use, meaning they're just being used without particularly having gotten a license for it. So, they can't just share them the same way. But I can still go and download them, of course, just one at a time. I set up a program to pick up about 1 per second. That is considered decent behavior in these matters. Might sound like a lot, but it shouldn't be much of a burden for the server. For example, the Ask Jeeves/Teoma web spider hits my own server about once per second, all day, every day, and that's perfectly alright with me. Anyway, the Wikipedia picture pickup took about a week like that, adding up to something like 20GB.
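The one-per-second pickup could be sketched roughly like this. To be clear, this is a hypothetical minimal version in Python, not the program actually used; the fetch function is injectable so the throttling can be exercised without a network:

```python
import time
import urllib.request
from pathlib import Path

def polite_download(urls, dest_dir, delay=1.0, fetch=None):
    """Fetch each URL with at least `delay` seconds between requests,
    saving each file under its own name in dest_dir."""
    if fetch is None:  # default: plain HTTP fetch; injectable for testing
        fetch = lambda url: urllib.request.urlopen(url).read()
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    last = None
    for url in urls:
        if last is not None:
            wait = delay - (time.monotonic() - last)
            if wait > 0:
                time.sleep(wait)  # throttle to roughly one request per `delay` seconds
        last = time.monotonic()
        name = url.rsplit("/", 1)[-1]
        (dest / name).write_bytes(fetch(url))
```

At one request per second, a million-odd images indeed works out to roughly a week of wall-clock time.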
Okay, that's the data. But what the database contains is wiki markup. And what the wikipedia/mediawiki system uses is pretty damned extensive markup, with loads of features, templates, etc. Which needs to be interpreted to show it as a webpage. My first attempt was to try the mediawiki software which wikipedia runs on. Which I can freely download and easily enough install. But picking out pieces of it is quite a different matter. It is enormously complex, and everything is tied to everything else. I tried just picking out the parsing module. Which happened to be missing some other modules, which were missing some other modules, and pretty soon it became unwieldy, and I just didn't understand it. Then I looked at some of the other pieces of software one can download which are meant to produce static copies of wikipedia. They're very nice work, but either didn't do it quite like I wanted, or didn't work for me, or were missing something important, like the pictures. So I ended up mostly doing it from scratch, based on the wikipedia specs for the markup. Although I also learned a number of things from wiki2static, a perl program which does an excellent job of parsing the wikipedia markup, in a way I actually can understand. It still became a very sizable undertaking. I had a bit of a head start in that I've previously made my own wiki program, which actually uses a subset of wikipedia's markup.
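To give a flavor of the kind of interpretation involved, here is a toy converter handling only a tiny sliver of the markup: bold, italics and internal links. This is an illustration, not the code in question; the real markup also has templates, tables, references, and all the rest that makes the job so sizable:

```python
import re

def wiki_to_html(text):
    """Convert a tiny subset of MediaWiki markup to HTML."""
    # bold before italics, since ''' contains ''
    text = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)
    text = re.sub(r"''(.+?)''", r"<i>\1</i>", text)
    # piped links first: [[Target|label]]
    text = re.sub(r"\[\[([^]|]+)\|([^]]+)\]\]",
                  r'<a href="/wiki/\1">\2</a>', text)
    # then plain links: [[Target]]
    text = re.sub(r"\[\[([^]]+)\]\]",
                  r'<a href="/wiki/\1">\1</a>', text)
    return text
```

Multiply that by templates that expand into more markup, and it becomes clear why the pieces of the real parser can't easily be pulled out in isolation.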
As it says on the wikipedia download site:
These dumps are not suitable for viewing in a web browser or text editor unless you do a little preprocessing on them first.
A "little preprocessing", ha, that's good. Well, a few thousand lines of code and a few days of non-stop server time seem to do it.
Anyway, it is a little mindblowing how cool it is that masses of valuable information is freely shared, and that with "a little preprocessing" one can use them in different contexts, build on top of them, and do new and different things, without having to reinvent the wheel first.
But the people who make these things available in the first place need support. Volunteers, contributors, bandwidth, money.
|2 Feb 2005 @ 18:37, by ming. Internet|
I'm doing various little programming contract jobs at the moment. And it is remarkable to notice how much effort apparently is being spent on trying to abuse various shared internet resources. I.e. getting around the way something was intended to be used, for the sake of self-promotion. Like, somebody just asked me to write a program doing what Blogburner does. I said no, and gave the guy a piece of my mind, but I'm sure he'll find somebody else to do it. "Blog and Ping" they call it. It is essentially that you automatically set up a number of fake blogs at a site like Blogger and automatically post a large number of regular web pages to them, pinging the blog update sites as you do it, pretending that you just posted something new on your blog. Of course exploiting the somewhat favored status that blogs have in search engines, and attracting traffic. Under false pretenses.
And that's just one of many similar project proposals I see passing by. There are obviously many people getting various kinds of spamming programs made. You know, stuff like spidering the web for forums and then auto-posting ads to them. Or automatic programs that sign up for masses of free accounts in various places. Or Search Engine Optimization programs that create masses of fake webpages to try to show up better in the search engines. I don't take any of those kinds of jobs, but it is a bit disturbing to see how many of them there are.
It is maybe even surprising how well the net holds up, and how the many freely shared resources that are available can remain viable. Another example. You know, there's the whois system one uses to check the registration information for a domain: who owns it, when it expires, etc. Now, there's a business in trying to grab attractive domain names that for one reason or another expire. So there are people who set up servers that do hundreds of thousands of whois lookups every hour, in order to catch domains right when they expire, in order to re-register them for somebody else. Or any of a number of variations on that scheme. To do that you'll want to do maybe 100 whois lookups every second. And most whois servers will try to stop you from doing that by having some kind of quota of how many you can do, which is much less. So, you spread the queries over many IP numbers and many proxy servers, in order to fool them. And the result is inevitably that a large amount of free resources is being spent in order for somebody to have a little business niche.
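The quota defense amounts to per-client rate limiting. Here's a minimal sketch of the idea, assumed mechanics rather than any particular whois server's actual implementation, with an injectable clock so it can be tested:

```python
import time
from collections import defaultdict

class QueryQuota:
    """Allow at most `limit` queries per `window` seconds per client IP."""

    def __init__(self, limit=10, window=60.0, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock
        self.hits = defaultdict(list)  # ip -> timestamps of recent queries

    def allow(self, ip):
        now = self.clock()
        # forget queries older than the window, then check the count
        recent = [t for t in self.hits[ip] if now - t < self.window]
        self.hits[ip] = recent
        if len(recent) >= self.limit:
            return False
        recent.append(now)
        return True
```

Since the count is kept per IP, spreading the queries over many IP numbers and proxies defeats it, which is exactly the evasion described above.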
At the same time, I can see that part of what makes the net work in good ways is indeed that one can build on somebody else's work with few barriers. That one can quote other people's articles, borrow their pictures, play their music, link to their sites, use their web services, etc. And add a little value as one does so. And I suppose the benefit of generative sharing will outweigh the problems with self-serving abuse of what is shared. But it seems it also involves a continuous struggle to try to hinder abusive use of freely accessible resources.
Like, in my blog here. An increasing number of visits are phony, carrying bogus referrer information just to make a site show up in my referrer logs. There's no very good solution to that, other than spending server resources on spidering all the sites to see if they really have a link to here.
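That spidering check could look something like this hypothetical helper, with the page fetch injectable for offline testing; it simply fetches the referring page and looks for the blog's URL in it:

```python
import urllib.request

def referrer_links_back(referrer_url, my_url, fetch=None):
    """Return True if the referring page actually contains a link to my_url.
    Unreachable referrers count as bogus."""
    if fetch is None:  # default: fetch the page over HTTP
        fetch = lambda url: urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    try:
        page = fetch(referrer_url)
    except OSError:
        return False
    return my_url in page
```

The cost is the point: one outbound request per suspicious referrer, which is exactly the server resources the spammers force you to spend.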
| 25 Jan 2005 @ 20:25, by ming. Internet|
So, I continue to have a bit of fun with that webcam thing I did. In part because there still are several thousand people coming by looking at it every day. So I add a few improvements once in a while.
Mikel Maron made the nice suggestion that one could establish the more precise location of the different cams collaboratively, and then one could maybe do fun things like having them pop up on a world map or something. So, I added forms for people to correct or expand the information on each location. Like, if they know the city, or the name of the building, company, bridge, or whatever, they can type it in. And while I was at it, I added a comment feature.
OK, so, presto, instant collaboration. Within a couple of hours lots of helpful (or maybe bored) visitors had figured out where a bunch of these places were, and they had typed them in.
But, at the same time, what is going on is that these webcams seem terribly interesting to Chinese- or Japanese-speaking people. 70,000 people came from just one Japanese softcore porn news site which for some reason linked to it.
But then there's a slight, eh, communication problem here. Or language problem. Or character set problem. See, I've set it up so that the forms where you leave comments or update the info can take Unicode characters. So if somebody wants to type a comment in Japanese, they should be able to do that. And some people do. But the explanatory text on my page is in English. And it seems that a large number of people don't really have any clue what any of it says, but they have a certain compulsion to type things into any field that they see. So, if there's a button that leads to a form where you can correct the city of the camera, they'll click on it, and they'll enter (I suppose) their own information. Or they say Hi or something. See, I find it very mysterious what they actually are writing. It is for sure nothing like English. But it isn't what will appear as Chinese or Japanese characters either. Rather, it looks to me like what one would type if one was just entering some random test garbage, by quickly running one's fingers over a few adjacent keys. But the strange thing is that dozens and dozens of different people (with different IPs) are entering either very similar, or exactly the same, text. This kind of thing: "dsfdsf" or "fdsfdfdsdsfd".
Yeah, I can type that with 3 fingers without moving my hand from the keyboard. But why would multiple people type exactly the same thing?? Does it say something common in Chinese?
Now, we have a bit of a cryptographic puzzle here. Notice that "Facility" (the name of the field) has twice as many letters as "City". And "fdsfdfdsdsfd" has twice as many letters as "dsfdsf". Consider the possibility that somebody might think they're supposed to enter the exact word they see into the field. Like some kind of access verification. And they use some kind of foreign character input method that encodes Latin characters as one and a half bytes. If so, I can't quite seem to decode the system.
Or, are we dealing with some kind of Input Method Editor (IME) that lets people form Chinese symbols by repetitive use of keys on a QWERTY keyboard? Anybody know?
This is a bit like receiving signals from some alien civilization. Where's the signal in the noise? How might these folks have encoded their symbols, and what strange things might they be referring to? Are they friendly? dsfdsf?
Otherwise, if anybody here actually speaks Chinese or Japanese, could you give me a translation, preferably into the proper character set, of a sentence like: "This is the information for the camera location. Please do not enter your own personal information here!"
|2 Dec 2004 @ 14:56, by ming. Internet|
As envisioned by the Rand Corporation 50 years ago. Yeah, they were pretty spot on. Except I don't have that big steering wheel thing.
... Later: see comments. It is actually a fake, photoshopped picture, pieced together from a submarine control panel and a few other items from other sources. But, hey, splendid work.
| 30 Nov 2004 @ 15:16, by ming. Internet|
Robin Good suggests a directory of freely re-distributable RSS feeds. Which is a fine idea. I'm not aware of any in existence. Well, there are some nice directories of feeds, like NewsIsFree and Syndic8, where one can subscribe to thousands of feeds and make one's own personal news portal. But can one mix and match from them to offer one's own feeds? Is the content really licensed for re-distribution? Mostly that's left vague. One might assume that if anybody is offering an RSS or Atom feed it is because they don't mind if one does whatever one feels like with it, but that isn't generally the case. The content is still in principle copyrighted, and various kinds of licenses might be implied.
Robin had a bit of an argument recently with another news site, as he took the liberty of creating an RSS feed of their articles as a service. Articles which are really just assembled from other public news sources. And they felt he was somehow depriving them of income by stealing their content without asking. But why shouldn't he?
The answer should really be a directory of feeds with clear Creative Commons types of licenses. I.e. people would state explicitly whether it is public domain, whether they need credit, whether it can be used for non-commercial purposes, etc. Which cuts out a lot of red tape, as you right away will know what you can do with it. And it opens the door for better tools for constructing custom feeds out of other feeds. The Algebra of Feeds, as Seb Paquet called it.
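With licenses stated explicitly per item, a first operation in such an algebra of feeds might be a filtered union. A sketch over already-parsed items, using plain dicts as a stand-in for whatever an RSS parser would actually produce:

```python
def merge_feeds(feeds, allowed_licenses):
    """Union of several feeds, keeping only items whose declared license
    is in the allowed set, newest first, duplicates removed by permalink."""
    seen = set()
    merged = []
    for feed in feeds:
        for item in feed:
            if item.get("license") not in allowed_licenses:
                continue  # license doesn't permit redistribution
            if item["link"] in seen:
                continue  # same permalink means same item
            seen.add(item["link"])
            merged.append(item)
    merged.sort(key=lambda item: item["date"], reverse=True)
    return merged
```

Intersection, difference, filtering by topic, and so on would follow the same pattern, which is what makes "algebra" a fitting word for it.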
|7 Nov 2004 @ 09:16, by swanny. Internet|
All the internets a stage..... an escape from the fetters of reality......
I...... we here be but some distant illusion and deception to enchant
and deceive and steal thy affection and coin and then leave thee
with some cold kiss..........
Our time eclipsed by time itself.
Enchant us you say.......
Save us from our dreary and dull existences and tempt us with song and
dance and wine and woman and visions divine.
For what I ask?
Mere coin and affection...... mere time and attention......
For I too will fall to the axe of time and soon be no more.....
What would you have me do...... The Grand Hari Kari..... for it seems you
crave drama and insanity. This seems no place or stage of reason......
For by its very nature and inception seems stood to deceive and delude in the
blink of an eye every and all to whom it beckons.
For master it has none. Who is the master of this stage...... Let them speak
but now that we might hear........ Who has called this stage into existence....
Pray speak now............. that we might "know"......
................. What is its purpose ...... this stage ........ this stage.....
Would but this stage could but speak and reveal its designs then what story
and tale might it tell....... STAGE!!!!!! Speak stage that all might hear and know.....
Oh stage oh stage wherefore art thou stage.....
Be you but dumb wood and nail and wire knotted and bell and whistles.....
Speak you then for the world doth listen........listen...... nay not all......
if everyone were listening ........ Stage..... I....... we seek from thee perhaps
a gift or two ...... Hath thee then gifts for us........ and L O V E.........
Oh stage.... oh stage hath thee a name at least then .......
then pray tell it to us now....... Stage what be thy name......
We are humanity and thou art stage but why dost thou not speak......
thou stigmergic philosophers stone and object of...... this discourse....... here there
and everywhere with feet on ground and stage on ground...... alas.....
sighs and exits.......
November 7th 2004
Earth
| 17 Aug 2004 @ 12:53, by ming. Internet|
"Microcontent" seems to be one of the buzzwords now. So, what is that, really?
Jakob Nielsen, interface guru, used it (first?) in 1998 about stuff like titles, headlines and subject lines. The idea being that first you might see just a clickable title, or a subject line of an e-mail, that you then might or might not decide to open. So, that title needs to be representative of the full thing, or you might not click it, or you'll be disappointed when you do. Microcontent (the title) needs to match macrocontent (the page, e-mail, article).
Now, that doesn't quite seem to be how "microcontent" is used nowadays. OK, on to 2002, Anil Dash says this, talking about a client for microcontent:
Microcontent is information published in short form, with its length dictated by the constraint of a single main topic and by the physical and technical limitations of the software and devices that we use to view digital content today. We've discovered in the last few years that navigating the web in meme-sized chunks is the natural idiom of the Internet. So it's time to create a tool that's designed for the job of viewing, managing, and publishing microcontent. This tool is the microcontent client. For the purposes of this application, we're not talking about microcontent in the strict Jakob Nielsen definition that's now a few years old, which focused on making documents easy to skim.
Oh, and an absolutely excellent article it is. It calls for the building of a client, a program that will allow us to consume and create microcontent easily. Not just aggregate it, but allow us to use it in meaningful ways. I.e. seeing the information how we want to see it, without having to put up with different sites' different user interface quirks. Good examples he gives at the time are Sherlock and Watson on the Mac. You can browse pictures, movies, flight schedules, ebay auctions and more, all from the same interface, and without having to go to the sites they actually come from. But we're still not quite talking open standards for all that.
Today, microcontent is being used as a more general term indicating content that conveys one primary idea or concept, is accessible through a single definitive URL or permalink, and is appropriately written and formatted for presentation in email clients, web browsers, or on handheld devices as needed. A day's weather forecast, the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent.
What is needed is the semantic web, of course. Where all content has a uniform format, and is flagged with pieces of meaning that can be accessed and collected by machines. It isn't there yet. Many smart people are playing with pieces of it, like Jon Udell, or Sam Ruby. Or, look at Syncato. All stuff mostly for hardcore techies at this point. But the target is of course to eventually let regular people easily do what they find meaningful with any data that's available on the net.