[Ben] Inference is dangerous. Information can be inferred that was never meant to be disclosed, with huge consequences, especially where trust and security are involved. How can we control this?
[Ben] FOL (First Order Logic) is too strict to model human logic, so we shouldn’t use it.
[Ben] Who is going to check the basic rules? When there is a small mistake in the rules, the mistaken inferences can be extrapolated and therefore become huge, as the sketch below illustrates.
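A minimal Python sketch of this point, with entirely hypothetical facts, rules, and predicates: one wrong base fact, fed through otherwise correct rules, cascades into a wrong access decision.

```python
# Toy forward-chaining sketch: all facts, rules, and predicates are made up.
# One wrong base fact ("alice still works at acme") propagates through
# otherwise correct rules.

facts = {("works_at", "alice", "acme")}  # wrong: Alice left Acme last year

rules = [
    # if X works_at C, then X has_badge_for C
    ("works_at", lambda x, c: ("has_badge_for", x, c)),
    # if X has_badge_for C, then X may_enter C
    ("has_badge_for", lambda x, c: ("may_enter", x, c)),
]

changed = True
while changed:  # keep applying rules until no new facts are derived
    changed = False
    for body_pred, head in rules:
        for fact in list(facts):
            if fact[0] == body_pred:
                derived = head(fact[1], fact[2])
                if derived not in facts:
                    facts.add(derived)
                    changed = True

print(sorted(facts))
# [('has_badge_for', 'alice', 'acme'), ('may_enter', 'alice', 'acme'),
#  ('works_at', 'alice', 'acme')]  -- one bad fact became an access decision
```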
[Ben] Why are initiatives like RuleML and Triple necessary if we have OWL (DL) to reason with?
[Jan Jaap] Does the Semantic Firewall really make things secure? Isn't trust a big issue here too? How does one know that an agent keeps its promises, for example about destroying private data?
[Jan Jaap] It seems to me that the TidalTrust algorithm considers path length (path shortness, actually) to be more important than the trust values assigned to the paths. Isn't it possible that a slightly longer path delivers a more accurate trust value? The sketch below makes this concrete.
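A simplified sketch of that objection. This is a shortest-path caricature, not Golbeck's actual TidalTrust computation; the toy graph, the 0-10 scale, and the min/average aggregation are all assumptions. Because only the shortest paths contribute, the longer but uniformly stronger path is never consulted.

```python
# Simplified shortest-path trust inference in the spirit of TidalTrust
# (not the exact algorithm from the paper). All trust values are made up.

from collections import deque

# trust[a][b] is how much a trusts b, on an assumed 0-10 scale
trust = {
    "A": {"B": 5, "D": 9},
    "B": {"C": 5},   # short path A-B-C, moderate trust
    "D": {"E": 9},
    "E": {"C": 9},   # longer path A-D-E-C, high trust: never used
}

def shortest_paths(src, dst):
    """Return all shortest simple paths from src to dst (BFS)."""
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # all remaining paths are longer than the shortest found
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in trust.get(node, {}):
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

def inferred_trust(src, dst):
    # path strength = weakest link; result = average over shortest paths only
    strengths = [min(trust[a][b] for a, b in zip(p, p[1:]))
                 for p in shortest_paths(src, dst)]
    return sum(strengths) / len(strengths)

print(inferred_trust("A", "C"))  # 5.0: the stronger A-D-E-C path is ignored
```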
[Swathi] How can the issue addressed by Joseph M. Reagle Jr. be resolved, where groups or communities that dislike a person band together and destroy that person's reputation?
[Swathi] Trust remains a very subjective matter. Even with algorithms such as TidalTrust, can we really trust users' opinions and recommendations?
[Xavier] A good security level could be obtained by combining a semantic firewall with local, conventional security measures such as encryption. At the moment, however, it is exactly this combination that causes many conflicts. A balance should be found... but how? Who should work towards it? And will system administrators be willing to lower their own security measures, putting their trust in the hands of a third party?
[Xavier] In the article by Golbeck et al., one part is about the influence of "nearness" on trust. If A and B are very close to each other, they might share some beliefs and are likely to be willing to invest a similar amount of trust in C. It is argued that this might not hold for humans, but only for agents. Why does it work for agents, then? And isn't the theory originally derived from social networks? I'm a bit confused here.
[Xavier] About the article by Dolog et al.: aren't user ontologies and observation ontologies more or less the same thing? Both seem to be about user behaviour: learning from it, storing data about it, and acting on it.
[Xavier] One last point about the article by Reagle: how can I get such a key? After reading the article, the starting point of the whole process wasn't clear to me. Should someone be monitoring the process, perhaps the PKI? And shouldn't there be a standard for the hashing as well? (The sketch below illustrates why.) My argument is that either this concept hasn't been discussed thoroughly enough, or it's simply too premature to use at this point in time.
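A small illustration of the hashing point, using Python's standard hashlib; the identity string is hypothetical. Without agreement on both the algorithm and a canonical form of the input, two digests simply cannot be compared.

```python
# Why a hashing standard matters: the identity string below is hypothetical.
# Two parties can only compare digests if they agree on the algorithm AND
# on a canonical form of the input.

import hashlib

statement = "mailto:alice@example.org"

print(hashlib.sha1(statement.encode()).hexdigest())  # one algorithm...
print(hashlib.md5(statement.encode()).hexdigest())   # ...another: incomparable

# even with the same algorithm, a different canonical form breaks matching
print(hashlib.sha1(statement.upper().encode()).hexdigest())
```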
[Arno] Without a trust layer, the semantic web can never succeed.
[Arno] Semantic firewalls are better compared to personalization systems than to traditional firewalls.
[Arno] The Friend Of A Friend principle is never going to work, due to the lack of freely accessible, public information sources and the lack of information about people in general.
[Bart-Jan] Predictions of trust in a social network can be improved by incorporating user characteristics. E.g., if John finds that older people are more trustworthy, those people will get a higher trust score from him; the sketch below illustrates this.
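A hedged sketch of this suggestion; the extension is not part of the TidalTrust paper, and the 0-10 scale, the weights, and the age threshold are all assumptions made here for illustration.

```python
# Hypothetical extension, not from the TidalTrust paper: blend a
# network-derived trust score with a bias based on a user characteristic.
# Scale (0-10), weights, and the age threshold are assumptions.

def adjusted_trust(network_score, rater_prefers_older, ratee_age,
                   bias=1.0, weight=0.8):
    """Combine a social-network trust score with a characteristic bonus."""
    bonus = bias if (rater_prefers_older and ratee_age >= 50) else 0.0
    # shrink the raw network score toward a neutral 5.0, then add the bonus
    score = weight * network_score + (1 - weight) * 5.0 + bonus
    return round(min(10.0, score), 2)

# John finds older people more trustworthy: same network score, two ratees
print(adjusted_trust(6.0, rater_prefers_older=True, ratee_age=62))  # 6.8
print(adjusted_trust(6.0, rater_prefers_older=True, ratee_age=25))  # 5.8
```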
[Bart-Jan] Trust can be inferred without using cryptography, even for heavy-weight applications like online banking.