Reference: National Taiwan University's "Knowledge Organization" Courseware

I am about to teach a graduate course on "Knowledge Organization and Metadata". The department has not designated a textbook, and at the moment no suitable one seems to exist. While preparing the course material recently, I discovered that the Department of Library and Information Science at National Taiwan University (Professor Chen Hsueh-hua's department) has been offering a similar course, simply called "Knowledge Organization", since as early as 2003, and that all of its courseware can be downloaded. I was overjoyed. (See http://ceiba3.cc.ntu.edu.tw/course/cb9879/.)

Having gone through the NTU course, my overall impression is that its "knowledge organization" leans more toward the knowledge organization needed in "knowledge management", that is, the currently fashionable kind practiced in many knowledge-intensive enterprises (consulting firms, IT R&D companies, and so on), rather than the knowledge representation and manipulation rooted in philosophical epistemology, logic, or computer science. It therefore looks like an intersection of library science, computer science, and management. The content is very rich and very practical, but as a disciplinary framework it feels somewhat disorganized; turning the teaching of this course into a textbook would still take considerable effort.
Moreover, since the material predates 2003 and work on ontologies has advanced considerably in the past couple of years, the course readings feel slightly dated.

Peking University asks that graduate teaching need not cover the subject exhaustively and in full detail; an outline is enough, so that graduate students grasp the framework, study on their own, and then consolidate what they learn in practice. The NTU courseware does not seem to fit this requirement very well either; it reads like undergraduate teaching. Still, I want to prepare my own material in as much detail as possible and stay flexible when actually lecturing. On the one hand this helps me develop some research topics of my own; on the other it lets students study independently once they have the courseware, and go on to choose research directions.

Looking again at the courseware I have prepared, the metadata portion is still overemphasized. It grew out of my metadata lectures rather than being approached from the angle of knowledge organization, which would explain the role of metadata, where it came from and where it is going, much more clearly.



Study Notes on the Philosophy of Information

"Philosophy is like the owl of wisdom that takes flight only at dusk, circling over its own territory to see whether anything new has appeared; what it has discovered this time is 'information'."

(Liu Gang, "A New Paradigm for the Philosophy of Science and Technology")

Yet in reality such people are mostly the decision-makers, bearing out the old dictum that "those who labor with their minds govern others; those who labor with their strength are governed by them."

So let me indulge in a bit of armchair theorizing as well, and see what the philosophy of information can offer libraries, digital libraries in particular.

On the disciplinary construction of library science

Masters of the philosophy of science (what we in China used to call "dialectics of nature") such as Kuhn, Lakatos, Feyerabend, and Popper critiqued and interpreted the formation and development of modern science from the standpoints of logical positivism, critical rationalism, epistemological anarchism, historicism, and sophisticated falsificationism, and they once served as guiding lights for the disciplinary construction of library science, even though a philosophy of science built around "physics-centrism" does not necessarily suit other disciplines: whatever it illuminated never quite fit, and it never won universal acceptance. Now even the philosophy of science itself has come under attack, accused of producing no demonstrable or predictive results and of doing science more harm than good; some have even branded these masters "traitors to truth". Meanwhile a new interpretation of the real world, one made from the standpoint of information, has burst onto the scene under the all-around impact of developments in information technology, and some expect the research paradigm of philosophy as a whole to be transformed by it.

This new interpretation is called the Philosophy of Information.

Tracing it back to its sources: a discipline that lacks a history "will inevitably lack coherence and not travel far". The philosophy of information stems from one of the three great Western philosophical traditions, the "Leibniz-Russell tradition", a tradition that has historically been distorted, dismembered, and even abandoned. This "formal tradition", characterized by "semiotics" and "logic", differs sharply from the other two great traditions, Plato's "classical tradition" and Kant's "modern tradition", and it is this symbolic tradition that supplies the philosophy of information with its long-standing theoretical foundation.

The development of information technology and the introduction of quantum information theory really do force us to reexamine whether this world is ultimately material, mental, or simply made of "information" (Popper, that master philosopher, has already given us something close to this with his "world of knowledge"). Must we really start doubting the reality of Zion, the human city in The Matrix?

One cannot help thinking that even philosophy, a science that claims to grasp the fundamentals of nature, society, and history, can be toyed with again and again. Put more neutrally: philosophy too needs to keep developing; or, human understanding keeps progressing. ... And if that is true of philosophy, how much more so of our library science?

Let us take a look at what the philosophy of information actually is:

The informatization of society has profoundly changed how people produce, interact, live, and think. Together with the other changes of our era, it has altered the background and context of philosophical inquiry and raised new questions that urgently demand philosophical answers. Some scholars have tried to grasp this at the metaphilosophical level, developing the relevant explorations into a research program and an independent field of inquiry, so as to provide original methodology for traditional and new philosophical topics and a systematic argument for the conceptual foundations needed to understand the information world and the information society. The "philosophy of information" concerns two aspects: the critical study of the nature of information and its basic principles, including its dynamics, utilization, and sciences; and the elaboration and application of information-theoretic and computational methodologies to philosophical problems. Its theoretical concerns fall under four headings. (1) The core: seeking a unified theory of information, whose basic problem is reflection on the nature of information, together with the analysis, interpretation, and evaluation of its dynamics and utilization, with particular attention to the systemic problems arising in information environments. (2) Innovation: its main purpose is to provide a new perspective on philosophical problems old and new, across many areas of philosophy. (3) System: using informational concepts, methods, tools, and techniques to model, interpret, and offer solutions to traditional and new problems. (4) Methodology: systematically organizing the concepts, methods, and theories of information and computer science, information and communication technology, and related disciplines, and providing a metatheoretical framework for their analysis.

(Liu Gang, "Chinese Philosophy of Science and Technology Entering the 21st Century", see: http://philo.ruc.edu.cn/pol04/Article/science/s_digital/200409/1084.html)



Firefox's New Toy: Piggy Bank

I have had Firefox installed for quite a while now. Apart from being a bit faster and seemingly a bit more stable, I had not felt any particular advantage, and some pages even seem to be supported less well. Two things I installed recently, however, are about to change that view.

Wizz RSS is a feed reader for blogs. I used the RSS reader 看天下 for a while, but it kept running into problems, and having to switch between it and the browser was inconvenient. Wizz sits in a sidebar next to the browser, which is much handier, though its features are still limited; there is no update notification, for example.
Piggy Bank is a far more impressive thing: it can be seen as a general-purpose Semantic Web browser (ha, at last we have one), whereas previously there were only special-purpose ones such as FOAF browsers. Admittedly there are still few practical Semantic Web applications, but I tried one of the applications listed in the project guide (http://simile.mit.edu/piggy-bank/guide.html) and it was very good: http://citeseer.csail.mit.edu/, a site for searching the full text of computer science research papers. On it, Piggy Bank supports downloading and storing metadata, linking to full text, and adding annotations. Excellent!
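
As a rough idea of what such a "Semantic Web browser" has to do under the hood (the reposted entry below describes the mechanism in more detail), here is a minimal Python sketch of my own, not Piggy Bank's actual code: scan a page's HTML for link elements whose MIME type marks them as RDF or feed data, and collect those URLs. The MIME-type list and the example page are placeholders.

```python
# A toy sketch of the link scraping a Semantic Web browser performs; this is my
# own illustration, not Piggy Bank's implementation (which is a Firefox extension).
from html.parser import HTMLParser
from urllib.parse import urljoin

# MIME types treated here as "semantic" links; Piggy Bank recognizes more formats.
SEMANTIC_TYPES = {"application/rdf+xml", "application/rss+xml", "application/atom+xml"}

class SemanticLinkFinder(HTMLParser):
    """Collects href values of <link> elements carrying a semantic MIME type."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.found = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        if attrs.get("type", "").lower() in SEMANTIC_TYPES and attrs.get("href"):
            self.found.append(urljoin(self.base_url, attrs["href"]))

# Usage with a made-up page: feed in HTML, read back candidate RDF/feed URLs.
html = '<html><head><link rel="meta" type="application/rdf+xml" href="/foaf.rdf"></head></html>'
finder = SemanticLinkFinder("http://example.org/")
finder.feed(html)
print(finder.found)  # ['http://example.org/foaf.rdf']
```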

The SIMILE project mailing list (general@simile.mit.edu) has a lively discussion about this application; it is worth a look.

Below I repost someone else's blog entry about this software, for reference, without further comment from me.

Original address:
http://lylejohnson.name/blog/2005/01/browsing-semantic-web-with-piggy-bank.html

Browsing the Semantic Web with Piggy-Bank
Piggy-Bank is a new extension for the Mozilla Firefox browser that allows you to easily browse the semantic data linked to from regular web pages. I've seen some other projects along these lines, but they tend to be focused on a particular flavor of RDF data (such as Joel's FOAFer extension, or Christopher Schmidt's DOAP Viewer extension ).

I'm still not quite sure how Piggy-Bank works, but at the least it's scraping web pages for any embedded links that have well-recognized types in the Semantic Web, such as “application/rss+xml” (for RSS feeds) and “application/xml+rdf”. It then follows those links and parses out the “information tidbits” from those sources, and presents that information to you in a sidebar. Piggy-Bank attempts to categorize the tidbits into high-level categories, such as “News” for RSS Channel and Item resources, or “Contacts” for FOAF's “Person” resources. You can save the tidbits of interest in a local database (“My Piggy Bank”) and search through them later; Piggy-Bank remembers the original source of the data and allows you to annotate them with comments as you desire.

In response to the question, “Why was [Piggy-Bank] built?” the developers offer the simple answer:



What Will the Semantic Web Look Like?

I had never looked carefully at the W3C and Semantic Web forums (semanticweb@yahoogroups.com). The content is somewhat narrow, but many of the discussions are still helpful for my dissertation. I pay particular attention to the more systematic long threads, especially those on the big-picture questions.

There has recently been some quite interesting discussion about what the Semantic Web is going to look like ("How is the Semantic Web going to look"):

First, someone named Rohan Abraham asked a very naive question (Sent: Friday, January 14, 2005 8:02 AM). But naive questions are often the most fundamental ones, and exactly the kind we ourselves tend to get ambushed with:

Can anyone tell me how semantic web is going to look in future?? Is all the HTML going to be taken away?? Or is RDF going to be along side with HTML.. Can any one answer the question and give me a link to the architecture of the Semantic Web. …

I was very curious to see how the W3C heavyweights would answer; I even suspected they might not deign to reply to this sort of question. Many such questions sink in the forum without a trace.

Very quickly a fellow Chinese gave the first response (one of those long settled abroad, of course): "Hey, Sir Tim's writings answer your question!" (Hi, TB Lee's vision answers all.) This fully displays how well-informed and kind-hearted we Chinese are.

From: Jun Shen

Sent: Friday, January 14, 2005 8:07 AM

Hi, TB Lee's vision answers all.

Next, Charles, who is said to have worked with Sir Tim for many years, gave his view. He noted that Sir Tim has written a great deal on questions like this, that many clever people have added their own views and worked very hard to demonstrate them, but that it is all still very much a work in progress, so...

The gist of his answer is simply this:

The next-generation Web does not replace the current Web; the markup languages are also evolving through new versions (HTML4 to XHTML1 to XHTML2, with RDF built in) without abolishing the old ones.

His examples, of course, will leave a beginner even more bewildered:

From: Charles McCathieNevile

Sent: Friday, January 14, 2005 4:43 PM

Along with other kinds of XML already on the Web (SVG, MathML, VoiceXML starting to appear more, SMIL, etc – all W3C XML languages for purposes that HTML is no good for, and capable of including RDF) this is already appearing all over the place.

But it isn't something you see, except in the functionality. It is something meant to be read by the machines, so they they can present things that are more like we want them to look (cool documents with little floating asterisks and aliens, or browsers that can tell you HOW they figured out why a particular flight seems like a good deal, or images that can explain themselves through a voice system to a blind child, or whatever you want the web to do)

Then a student of Sir Tim's at MIT, having heard that this Charles had worked with Sir Tim for years, wanted to debate a question about ontologies, and steered the thread off topic.

Sir Tim holds that ontologies should be built through a process in which a group of people reach consensus; the student's view is exactly the opposite. Arguing from human nature, he thinks reaching consensus is impossible. Interesting.

From: Shashi Kant

Sent: Monday, January 17, 2005 11:50 PM

I notice that you mention your involvement with TimBL… I am a grad student at MIT under Tim's supervision and we have regular debates about Ontology creation. As you are probably aware, Tim's view is that Ontologies should be created through a consensus approach- an “Ontology-by-committee” approach.

My view is exactly the opposite – I am a firm believer that such a consensual approach is a utopian pipedream. After all consensuses is, at the best of times, a very fickle entity. In fact I remember reading somewhere that when they got 3 domain experts in a single domain to create Ontologies, they only found about 30% commonality. And that is not even considering other typically human factors – egos (“is he really an expert?”), politics, and whatnots…

Plus it is impractical to assume that a corpus of Ontologies could be generated to accommodate the breathtaking rate at which information is being generated. I think it is just humanly impossible!

IMHO Ontologies are best generated using accepted machine learning approaches – sure they may turn out be at best 50% accurate, as compared to say a committee that takes 1 year to come up with an Ontology and spends millions of dollars to come up with an Ontology that is obsolete the moment even before it is created.

What are your thoughts on this subject? As a regular member of this board I would love to hear your thoughts on this matter.

Someone then suggested they take the discussion offline, since this off-topic subject was not of general interest.

A German in Leipzig, Sören, nevertheless pushed the question further. He first endorsed Sir Tim's "consensus" view, arguing that people always tend to shift the meanings of concepts, something machines must absolutely not be allowed to do (the day machines learn to do that will be a disaster for humanity; that is exactly how the stories in science fiction begin). He then went on to discuss the importance of first- and second-order predicate logic and of applied mathematics for describing domain knowledge, and held that some current progress is worth boasting about. He too seems to be a master-level figure (or at least someone of the seniority of those who have been around Sir Tim for many years).

From: Sören Auer

Sent: Tuesday, January 18, 2005 12:25 AM

Seems reasonable to me too. People are only able to communicate since there is a consensus about what distinct words mean. Unfortunately people (sometimes) tend to have (slightly) different concepts in mind when communicating – that seems from time to time the reason for problems like divorce till even war. 😉
When machines are communicating we can't tolerate such misunderstandings. That's why I think there is strong need for a terminological knowledge representation like the one provided by SemWeb standards like OWL, which base on description logic and thus may support ensuring consistency and the other DL services.

To represent the whole (not only terminological) knowledge of a domain you have to use a knowledge representation at least as expressive as first order logic. Probably even second, since mathematics needs SO and which serious domain may live without maths? Unfortunately already FO logic has terrible computational caracteristics. AI communities try (more or less successful) to develop more efficient knowledge representation strategies here such as nonmonotonic resoning.
I think ontologies are not for representing all knowledge now lying around on webpages, but rather shall provide a grid to classify and maybe rearrange this knowledge, further to build common vocabularies for application systems to communicate (see WSMO, OWL-S). I think already this would be I gigantic achievement!

John Flynn offered a long-winded string of examples by way of analogy, comparing the creation of ontologies with the creation of web pages: the ontology world will be a diverse one, there will be good ontologies and bad ones, and in time there should be "authoritative" ontologies, and so on.

From: John Flynn

Sent: Tuesday, January 18, 2005 6:30 AM

I believe it is likely that ontologies will emerge much in the same way that html web sites and xml schema have evolved. Almost anyone can create an html web site but some become better accepted than others. Communities of interest evolve around almost every subject and out of those communities a few “authoritative” web sites emerge. For example, if you are interested in the subject of human resources there are many web sites that focus on that subject. The HR-XML Consortium provides a reliable set of xml schemas on various aspects of human resources that have been vetted by their large corporate membership. If you are interested in news you might naturally go to CNN, Google News, or one of the other widely recognized news web sites. If you are more adventurous you might try some of the news blogs as your news source. Over time selected web sites become known and accepted as providing mostly reliable information. This process will probably hold true for ontologies as well. Some ontologies will emerge as quasi standards, such as Dublin Core, and people will incorporate, modify and/or extend those ontologies as required to meet their needs. But, just as on today's public html web, there will be lots of junk ontologies posted and some ontologies created to intentionally mislead people. We will learn to deal with these just as we do with such html sites today. There will also be ontologies that are created and maintained by educational, commercial and government organizations on intranets. Basically, I don't see the growth and availability of ontologies as anything much different that what has been happening with html sites and xml schema.

Another gentleman, Neil, who would have liked some connection to Sir Tim himself, found the topic very interesting and joined in. He agreed with Flynn that ontology creation is indeed not absolute: it is market-driven, sits somewhere between fully formal and informal, and purely academic formalization is very difficult to achieve. He put forward a "market-oriented" view: economy and rapid uptake are the criteria for whether an ontology survives, while complexity and richer functionality can be goals for later refinement.

From: neil.mcevoy@ondemand-network.com

Sent: Tuesday, January 18, 2005 2:11 PM

I thought I'd join in at this point as its very interesting thread. I'd like to say I work with TimBL in some way, but I don't, in any way… 😉

I'm inputting from a business point of view, which I think like in many technical projects does feel to be missing from the semantic web discussions, and suggest it offers a few points and ideas. Prompted by agreement with John Flynn, in that I'm working on the basis that in general the production of ontologies will be a dynamic balance of formal and informal processes, mainly driven my market demand.
One would imagine that within a purely academic context, consensual methods would be more difficult because let's just say there is more appetite for absolute technical correctness and authority with more likelihood of egos and ivory towers etc. I'm quite sure if they wanted to they could stretch out the process for years! 😉
What business adds is the imperative to get something working quickly, and the understanding that it doesn't need to be perfect to be useful. Hence why I see the balance of the two; in the early days of domain development there will be much greater freedom to define and implement with less formal controls, enabling small domain teams to drive the first chunk and make it available. The point at which you need a committee approach is to enable it to scale and become universal. Quite simply for example, if you want all the big media companies to adopt a single framework, they will all need some form of equalised involvement in its development, or they won't play ball. Once you have a large cross-company team working from all over the world together, the only way to facilitate it will be via committee processes. The general idea that a committee doesn't work is not correct because we can see it can; check out VISA for example.
I'd also suggest that what business will offer is the simplicity to get things moving along. Although I'm sure it will get much more complex, all you need to start creating business value is the simple bits. For example, a tag for [Graphic designers] so that you can search the semantic web for [Graphic designers] in [London]. Hardly a massive ontology, but would actually enable lots of flow of commerce.
So it seems it's less so about the complexities of ontologies at this stage, and more about universal adoption and basic foundations, such as the DNS equivalent for registries etc. ie everyone agreeing that [Graphic designers] is the common method, so that we can move on to defining more complex elements.

An Italian, Dario, jumped in with a paradox: machines cannot reach consensus on their own; everything has to be translated into human terms. But then how does a machine know whether it has been translated into the human conceptual scheme?

From: Dario Bonino

Sent: Tuesday, January 18, 2005 6:41 PM

I thought I'd join in at this point as its very interesting thread. I perfectly agree with Sashi about the process of ontology creation, however there is a point that it is not clear, wheter or not human knowledge and machine knowledge should have a contact point. In the last case I think that, at this moment, we are committed to the human classification. In other words, we could extract many clusters (or other, I don't know which is the exact term, sorry for my english) using LSI, or similar techniques but we also need a group of humans saying “ok, for a human being this cluster means that concept” at least with a certain degree of confidence… This is the biggest problem I think, the join point between human and machines. In my opininion, it doesn't matter where the join point is,
on the ontology rather than on mapping automatic extracted knowledge to human knowledge.
The problem is in that, if we want to deal with human beings we need humans to tell about what resources are… I don't know any machine thinking like humans, until now….

The MIT student, perhaps embarrassed by the grammatical errors in his earlier post, came back with a rather good summary of the topic. You can tell this young man has done a fair amount of research in this area.

1. He thinks the ratio of machine to human involvement in ontology creation should be roughly 80/20;

2. Top-level ontologies can be created by people, but domain ontologies can be created entirely by machines and then aligned or merged with the top-level ones;

3. His intuition is that human-created ontologies add complexity for machine processing, so he suggests using machines to create ontologies as far as possible; putting humans in the middle of the process makes little sense to him (note: here is a young man thoroughly poisoned by computer science);

4. Even if an automatically generated ontology captures only 10% of a domain, that is still better than nothing;

5. The reason the Semantic Web has not taken off is entirely that ontology creation is too slow!!!

He then piled on examples (what the MIT Data Center people say..., how impressive those people are..., how if they, and the likes of Walmart and Dell, adopted the Semantic Web it would give the Semantic Web its killer app...) to drive home his point 5.

From: Shashi Kant

Sent: Tuesday, January 18, 2005 8:09 PM

Hello Charles and everyone for responding and making this an interesting discussion. IIRC this thread has turned out to be one of the most interesting on this forum for a very long time. First off, let me apologize for the poor grammar and typos in my last post …I was very sleep-deprived and tired..take pity on me I am @MIT 🙂

1. I largely agree with the positions that Charles, Dario et al have taken, that ultimately we may end up with a hybrid approach to Ontology creation – a combination of machine-generated with human-generated. If I were to hazard a guess… perhaps in 80/20 proportion.

2. I would take another guess at this and say that the majority of top-level Ontologies would likely be human-generated, and most domain-specific ontologies would be machine generated. Perhaps Aligned and/or merged with the top-level ones.

3. Another thing counter-intuitive about the idea of human-generated Ontologies is …after all the semantic web is about making the web machine-comprehensible, so why not automate the Ontology generation process to the extent possible? It just does not make sense to place humans in the middle of this process.

4. I would further argue that if someone were to come up with a good IR algorithm and feed the encyclopedia Britannica to it. The resultant Ontologies may be contain..say only 10% of the concepts/relations in that domain. But that's 10% (some might say 10^n %) better than nothing! Take Charles' example – “medieval European Recipes”. Unless someone really has a vested interest in creating a domain Ontology for medieval culinary art I would doubt anyone would ever bother creating one. I would be very surprised if DARPA or MIT or Stanford would fund a medieval cooking ontology creation committee.

5. The semantic web idea has been out there for quite a while now, but we don't really have very many Ontologies that can claim to be acceptably complete. Ontology availability is, IMHO, the single biggest challenge of the semantic web and what's really holding the semantic web back. Unless you provide “real-world” applications (no hand-waving) for people to create Ontologies, they just cannot be bothered to do so. It's that simple.

Bottomline: One doesn't get more chicken-and-egg than this!
“It is unrealistic to believe that any independent body of academics or practitioners could formulate an all-inclusive canon that would stand the test of time. The ontology approach is a throwback to the philosophy of Scholasticism that dominated Western thought during the high middle ages. History has proven that canonical structures, meant to organize and communicate knowledge, often have the unintended outcome of restricting the adoption of further innovations that exist outside the bounds of the canon.”

That is how an MIT Data Center paper (www.mitdatacenter.org) puts it. While this opinion may be the other extreme of the spectrum, I think it sums up how the Walmarts, and the Dells of the world see the semantic web today. This is very unfortunate, because the semantic web badly needs the ballyhooed “killer app”, and the coming “data tsunami” because of RFID systems, sensor networks
etc. would have been a good, good one.

BTW MIT Data Center is an offshoot of the former MIT Auto ID center – the people who came with the EPC standards for RFID etc. So their buy-in would have been a huge boost for the S/web. It now looks they are going their separate ways – in fact they are even proposing a new modeling language called “M” (counterpart of OWL).

If you are interested I recommend reading up on their website – their contrarian viewpoint is fascinating.

Sören then came back to clarify a few points and gave several examples. His views are far more realistic, comprehensive, and rational than those of the purely "computer" minds, though whether they can persuade those machine brains is another matter. Even graduate students at famous foreign universities, it seems, do not necessarily understand many of these issues accurately.

From: Sören Auer

Sent: Tuesday, January 18, 2005 9:45 PM

I'm a bit confused since all of you seem to understand Ontologies as a tool for arbitrary knowledge representation. As I mentioned in my last posting I don't think they are prepared to solve this task (especially if based on Description Logic as OWL).
Textual knowledge on websites contains so many vaguenesses, contradictions and exceptions. Humans can cope with them and sometimes it's even easier (for us synapse based reasoners) to get the spirit of an idea if it is described from contradictory viewpoints. But I'm quite sure machines won't be able to do the same at least within next 20 years or so.
Artificial intelligence research developed a variety of theories to make machines more intelligent in the human way. I'm not an expert in default reasoning, nonmonotinicity or horn logic, but my impression is that they are still far from being efficiently applicable. Description Logics and ontologies probably are a bit more mature but still there are many open problems (such as perspective reasoning, linking, merging, reconciliation, versioning). Even if all those problems are solved and if you manage to automatically generate ontologies from textual documents the benefit won't be much better than todays elaborated full-text searches, since DL can't (and is not intended) to cope with vaguenesses, contradictions and exceptions at all. And already one contradiction makes any further DL reasoning more or less senseless.

Already today quite much of the current web content is structured in proprietary database schema, xml-dialects. Here I think is the real impact of a terminological knowledge representation like OWL – defining globally shared, common vocabularies for distributed searching, view generation, querying, syndication of such structured data.

Projects in this context like – OWL-S/WSMO (description for automatic selection/composition of web-services),

– D2RQ (Treating non-RDF Databases as Virtual RDF Graphs)
– future (Semantic) WebApplications (you can have a look at my Powl
project for this – http://powl.sf.net) seem very promising to me.

For applications intended by the W3C you can have a look at the “OWL Web Ontology Language Use Cases and Requirements” document ( http://www.w3.org/TR/webont-req/).
Of course enriching arbitrary web pages with terminological classifications may be an application as well. But I think even this won't be possible automatically in a quality that gives us an real impact. But I'm open to conviction. 😉

Alex then offered an outlook on resolving the vagueness of textual knowledge; technology, it seems, may yet solve these problems. This thread is clearly not over, so let us wait and see.

From: Alex Abramovich

Sent: Thursday, January 20, 2005 6:10 PM

Yes, textual knowledge vagueness is a stumbling block of SW investigations. But it has an own nature that one can to make clear. What just is vague? A current operational context is uncertain. Nothing shall prevent us from building a library of operational contexts today!
An analysis of a sentence (based on this library) will derives a set of expectations of operational contexts. An analysis of subsequent sentences will confirm one of them.
It seems to me that something similar to this approach suggested Roger Schank (“Conceptual Dependency”).



On OWL-S Service Descriptions

Does service discovery assume that requester and provider use the same ontology? Certainly not. Otherwise the use of OWL-S would be greatly restricted, even rendered pointless, since its premise would be flawed from the start. (But does it assume the same ontology encoding language? It should not, though this is less certain.)

Will there be some specification of ontology mediation, the way WSMO handles it with its OO mediators? That could be done, but so far there is no research plan or progress in this direction. OWL-S is a service description language built on OWL; it does not stipulate whether there must be middleware or services implementing particular functions.

Does it contain only input and output descriptions? Again the answer is no. The OWL-S profile is used to "advertise" a Semantic Web service; what it describes includes the service description, product description, inputs, outputs, preconditions, effects, access conditions, quality of service, security parameters, and more. Any parameter related to the service can be described with OWL-S.
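
To make that list of profile parameters more tangible, here is a minimal sketch, purely illustrative and not the OWL-S schema itself, of the kinds of things a profile can advertise; all names and values are invented.

```python
# An illustrative stand-in for an OWL-S service profile: the point is the breadth
# of what can be advertised (inputs, outputs, preconditions, effects, QoS, security),
# not the actual OWL-S vocabulary, which is expressed in OWL rather than Python.
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    name: str
    text_description: str
    inputs: list = field(default_factory=list)        # e.g. concept URIs from some ontology
    outputs: list = field(default_factory=list)
    preconditions: list = field(default_factory=list)
    effects: list = field(default_factory=list)
    access_conditions: list = field(default_factory=list)
    quality_of_service: dict = field(default_factory=dict)
    security: dict = field(default_factory=dict)

# A hypothetical advertisement for a book-selling service.
profile = ServiceProfile(
    name="BookSellingService",
    text_description="Sells a book given an ISBN and a credit card.",
    inputs=["onto:ISBN", "onto:CreditCard"],
    outputs=["onto:Receipt"],
    preconditions=["credit card is valid"],
    effects=["book is shipped", "card is charged"],
    quality_of_service={"avg_response_ms": 200},
    security={"transport": "https"},
)
print(profile.name, profile.outputs)
```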

The following, from a post by Katia Sycara <katia+@cs.cmu.edu> clarifying questions about OWL-S service descriptions, seemed worth archiving here.

        There were a number of issues raised in this discussion:

        1. Does OWL-S discovery assumes that requesters and provides use a unique ontology?

        The answer is NO. OWL-S does not assume the use of a single onrology. It is difficult, however, to see what you mean by “one single ontology”. If you mean “one single OWL file”, then of course trivially OWL-S does not assume a single ontology since you can import as many OWL files as you desire in an OWL-S description, and use any of the concepts defined in those files to describe the OWL-S profile or any other OWL-S component. During the discovery process the Profile of the requested service may refer to a concept, say a:A (the concept A defined in “file” a), and an advertisement may refer to concept b:B that belong to a different ontology (different owl files), and yet b:B may be defined as a subclass of a:A. In this case, matching engines would still be able to match exploiting the logical relation between A and B. At CMU, we have shown different kinds of matches (e.g. exact, plug-in etc) in our matching algorithm (see e.g. [3]).

Another way in which the use of different ontologies can be handled in OWL-S is through mapping rules that could be expressed in SWRL. In this way, to the extent that the similarity between A and B can be made explicit, then the mapping can be exploited. Of course there are issues of where these mappings live, how it is discovered where they live, since of course in the process of service discovery one does not know a priori what the ontological needs of one request would be vis a vis the current advertisement knowledge base. Even if one assumes a unique knowledge base containing such mappings, another set of issues is of course, how this knowledge base gets searched efficiently. (My note: part of my dissertation aims to solve exactly this problem, using a registry service to perform the mapping automatically; but triggered by what kind of request? The mechanism is still an open question.)

The issue of ontological mapping is an old and well known one that has predated Semantic Web Services. Work on how to express mappings to achieve semantic interoperability efficiently (even assuming the mapping rules are known) has been going on since the late 80's (perhaps even earlier).

        The general problem of arbitrary ontology mapping is an open research problem. The real scientific work is in trying to attack the technical issues that I outlined (and others that are there but I did not refer to). After we solve these scientific problems (ie. How to derive the mappings, and how to use them), we can worry about what to call the algorithms.

Since ONTOLOGY MEDIATION is an open research issue, OWL-S is agnostic about the actual ontology mediation process used. (My note: there should be research papers on this worth consulting; it belongs to the innovative part of my dissertation.) To the extent that the mediation process is a service, rather than a set of rules, it can be represented in OWL-S and discovered.

        2. Should OWL-S do something about ontological mediation like WSMO is doing with the OO mediators?

        Up to now, there is no clear operational definition of what a WSMO mediator is, neither is there a clear specification of an ontology or language for describing mediators, or an algorithm for ontological mediation.

        To the extent that WSMO mediators are services, rather than sets of rules, they can be represented in OWL-S by specifying what is their profile, process model and grounding (for a detailed discussion on this point see [2]). Furthermore, the discovery mechanism may then become similar to a composition procedure where you combine discovery of the appropriate mediator with the discovery of the appropriate service.

        Note that if you take this viewpoint, the sentence “OWL-S has no mediators” is non-sensical: it is analogous to sentences like “Java has no Operating System” or other such sentences. OWL-S is a language (it uses OWL semantics) that allows you to describe Web services, it does not tell you what infrastructure Web services need, nor does it stipulate the existence of mediators or of a discovery registry or any other component. If you think you need a mediator, the role of OWL-S is to provide you the tools to describe a mediator if you decide to implement it as a Web service. If you look at [2] there is a discussion on how to do that.
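
Coming back to point 1 above: the matching described there can be pictured with a toy sketch. This is my own illustration, not CMU's matchmaker; the URIs, the subclass fact, and the wording of the match results are all invented for the example.

```python
# Toy illustration of matching a requested concept against an advertised one by
# exploiting subclass relations that may span different ontology files.
# Invented URIs; in practice the subclass axioms come from the imported OWL files.
SUBCLASS_OF = {
    "http://b.example.org/onto#B": {"http://a.example.org/onto#A"},  # b:B rdfs:subClassOf a:A
}

def superclasses(concept):
    """The concept itself plus all of its (transitive) superclasses."""
    seen, stack = {concept}, [concept]
    while stack:
        for parent in SUBCLASS_OF.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def match(requested, advertised):
    """Reports how the advertised concept relates to the requested one."""
    if requested == advertised:
        return "exact: same concept"
    if requested in superclasses(advertised):
        return "match: advertised concept is a subclass of the requested one"
    if advertised in superclasses(requested):
        return "weaker match: advertised concept is a superclass of the requested one"
    return "fail: no subsumption relation found"

# The situation from the post: the request uses a:A, the advertisement uses b:B.
print(match("http://a.example.org/onto#A", "http://b.example.org/onto#B"))
```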

Some Books and Essays on Knowledge/Information Organization

See the following:

Classic foreign-language books and essays related to knowledge/information organization

Anderson, J. D. (2003). Organization of knowledge. IN: International Encyclopedia of Information and Library Science. 2nd . ed. Ed. by John Feather & Paul Sturges. London: Routledge (pp. 471-490).

Bade, D. (2002). The Creation and Persistence of Misinformation in Shared Library Catalogs: Language and Subject Knowledge in a Technological Era. Urbana, IL: Graduate School of Library and Information Science, University of Illinois. (ISBN: 0-87845-120-X)

Bliss, H. E. (1929). The organization of knowledge and the system of the sciences . By Henry Evelyn Bliss ; with an introduction by John Dewey. New York: Henry Holt and Company.

Bliss, H. E. (1934). The Organization of Knowledge in Libraries and the subject-approach to books . New York: The H. W. Wilson Company.

Bliss, H. E. (1935). A system of bibliographical classification . New York: The H. W. Wilson Company.

Capurro, R & Hjørland, B. (2003). The Concept of Information. Annual Review of Information Science & Technology, Vol. 37 , Chapter 8, pp. 343-411.

Dewey, J. (1929). Introduction. IN: H. E. Bliss: The organization of knowledge and the system of the sciences . New York, Holt (pp. vii-ix).

Feger, H. (2001). Classification: Conceptions in the Social Sciences. International Encyclopedia of the Social & Behavioral Sciences, Vol. 3, pp. 1966-1973 . Amsterdam: Elsevier Science, Ltd. (Online version with abstract published 2002)

Frohmann, Bernd. (1990). Rules of Indexing: A Critique of Mentalism in Information Retrieval Theory. Journal of Documentation , 46: 81-101.

Frohmann, B. (2003). Grounding a theory of documentation. Paper presented at DOCAM '03, the first annual meeting of the Document Academy, August 13-15, 2003, at The School of Information Management and Systems (SIMS), The University of California, Berkeley. http://thedocumentacademy.hum.uit.no/events/docam03.abstracts/bernd.frohman.html

Frohmann, B. (2004). Deflating Information. From Science Studies to Documentation . University of Toronto Press.

Furner, J. (2004). Information studies without information . LIBRARY TRENDS , V52, N3 (WIN), P427-446.

Hjørland, B. (2002). Domain analysis in information science. Eleven approaches – traditional as well as innovative. Journal of Documentation, 58 (4), 422-462.

Hjørland, B. (2002). Principia Informatica. Foundational Theory of Information and Principles of Information Services. IN: Emerging Frameworks and Methods. Proceedings of the Fourth International Conference on Conceptions of Library and Information Science (CoLIS4) . Ed. By Harry Bruce, Raya Fidel, Peter Ingwersen, and Pertti Vakkari. Greenwood Village, Colorado, USA: Libraries Unlimited. (Pp. 109-121).

Hulme, E. Wyndham (1911a). Principles of Book Classification: Introduction. Library Association Record, 13: 354-358.

Hulme, E. Wyndham (1911b). Principles of Book Classification: Chapter II - Principles of Division in Book Classification. Library Association Record, 13: 389-394.

Hulme, E. Wyndham (1911c). Principles of Book Classification: Chapter III - On the Definition of Class Headings, and the Natural Limit to the Extension of Book Classification. Library Association Record, 13: 444-449.

ISO 5127: 2001 Information and Documentation – Vocabulary. International Standards Organization.

Miksa, F. (1998). The DDC, the Universe of Knowledge, and the Post-Modern Library. Albany, NY: Forest Press.

Richardson, E. C. (1930/1964). Classification: Theoretical and practical . New York: The H. W. Wilson Co., 1930. (Reprinted unaltered 1964)

Sayers, W. C. (1915). Canons of classification applied to “the subject” “the expansive”, “the decimal” and “the Library of Congress” classifications : a study in bibliographical classification method. London: Grafton & Co.

Smiraglia, R. P. (2001). The nature of “a work”: implications for the organization of knowledge. Lanham, Md.: Scarecrow Press.

Sowa, J. F. (2000). Knowledge representation : logical, philosophical, and computational foundations . Pacific Grove, California: Brooks/Cole.

Spang-Hanssen, H. (2001): How to teach about information as related to documentation. Human IT. 2001, (1), pp. 125-143. http://www.hb.se/bhs/ith/1-01/hsh.htm (Visited April 13, 2004).

Steen Larsen, P. (2003). Terms and definitions related to the information process, drawn from ISO 5127: 2001 Information and Documentation – Vocabulary. Unpublished paper.

Svenonius, E. (2000). The Intellectual Foundation of Information Organization . Cambridge, Massachusetts: The MIT Press.

Taylor, A. G. (1999). The Organization of information. Englewood, Colorado: Libraries Unlimited.

Thellefsen, T.L. (2002). Semiotic knowledge organization: Theory and method development. Semiotica, 142 (1-4), 71-90.

Webber, S. (2003). Information science in 2003: a critique. Journal of Information Science , 29(4), 311-330.



Some Questions about OWL-S Adoption

Some questions about the adoption of OWL-S (excerpted from a post by Evan K. Wallace on the W3C Semantic Web Services list, public-sws-ig@w3.org):

Eric Miller mentioned at a recent meeting that software companies seem slower to adopt OWL-S than they were with RDFS and OWL. The likely reason is that OWL-S is currently still a W3C submission rather than a Recommendation: it is under discussion and may still change substantially. In addition, there are few good tools and not many reference documents or reference cases, which also hampers adoption.

In fact there are many comparable specifications at the same level as OWL-S, such as XPDL, BPML, BPEL4WS, ebBPSS, BPRI, WMF, and the UML2 Action Semantics, as well as more formal ones such as PSL and SWSL. OWL-S does not seem to tower over its peers the way OWL does among its own kind (particularly as a conceptual modeling language), and it does not appear to have absorbed enough of what these peer languages have achieved.



An Interview with Ontology Heavyweight Tom Gruber

Dr. Tom Gruber's (Co-founder and Chief Technical Officer of Intraspect Software) Interview
For the Official Quarterly Bulletin of AIS Special Interest Group on Semantic Web and Information Systems, Volume 1, Issue 3, 2004

Tom Gruber (tomgruber.org), the biggest name in ontology, the man whose definition of "ontology" has been cited by countless people, has recently said a few more remarkable things:

He says: "Every ontology is a treaty – a social agreement – among people with some common motive in sharing."

He classifies ontologies into formal, semiformal, and informal ones. Formal ontologies, he believes, are hard to reach agreement on and come with many restrictions, whereas semiformal ontologies are more useful: the formal half is processed by machines, while the informal part is there for people to read. Interesting.

The term “Semiformal Ontology” refers to a ontology which has a few bits of formality but is largely informal. It is the analog of what Tom Malone calls semistructured data, such as email or office forms. A semiformal ontology could support technology to processing of its formal parts but leaves it to the reader make sense of the informal parts.
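
As a concrete (and entirely invented) reading of that idea, here is a minimal sketch of what a "semiformal" term entry might look like: a few formal, machine-processable slots alongside free text that only a human reader is expected to interpret. This is my own illustration, not Gruber's or Intraspect's design.

```python
# A made-up "semiformal" ontology entry: formal slots a machine can process,
# plus informal notes that are left to the human reader.
from dataclasses import dataclass, field

@dataclass
class SemiformalTerm:
    term_id: str                                        # formal: stable identifier
    subclass_of: list = field(default_factory=list)     # formal: taxonomy links
    properties: dict = field(default_factory=dict)      # formal: typed slots
    informal_notes: str = ""                            # informal: prose for people

project_document = SemiformalTerm(
    term_id="ex:ProjectDocument",                       # hypothetical vocabulary
    subclass_of=["ex:Document"],
    properties={"ex:belongsToProject": "ex:Project"},
    informal_notes="Anything a team files in a project workspace; "
                   "when in doubt, ask the project owner where it belongs.",
)

# Machines reason over term_id / subclass_of / properties; informal_notes is for
# readers, which is the division of labor the interview describes.
print(project_document.term_id, project_document.subclass_of)
```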

Tom believes that ontology tools (the sort of thing his company Intraspect is working on) will make ontologies much easier to use, especially for non-technical users.

Tom also believes that semiformal ontologies will work well because they can draw on context.

