To enhance the modeling power of classical Petri nets, stochastic time is associated with places, yielding the Stochastic Places Petri Net (SPPN), and relations between tasks are analyzed on the basis of SPPN. For an online enterprise sales system, a bottom-up approach from individual organizations to their composition is adopted, and the system is modeled with SPPN and logic Petri nets. An algorithm for constructing the reachability graph of the logical workflow net model is given, and the correctness of the model is analyzed.
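As a rough illustration of the reachability-graph idea, the sketch below builds the reachability graph of a plain place/transition net by breadth-first search. The function name and the net encoding are hypothetical; the paper's algorithm targets logical workflow nets with logic transitions and stochastic places, which this illustration does not reproduce.

from collections import deque

def reachability_graph(initial_marking, transitions):
    """initial_marking: tuple of token counts per place.
    transitions: list of (pre, post) token-count vectors.
    Returns the reachable markings and the labelled firing edges.
    Assumes the net is bounded so that the search terminates."""
    seen = {initial_marking}
    edges = []
    queue = deque([initial_marking])
    while queue:
        m = queue.popleft()
        for t, (pre, post) in enumerate(transitions):
            # A transition is enabled if every input place holds enough tokens.
            if all(m[p] >= pre[p] for p in range(len(m))):
                m2 = tuple(m[p] - pre[p] + post[p] for p in range(len(m)))
                edges.append((m, t, m2))
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen, edges

# Example: two places, one transition moving a token from p0 to p1.
marks, arcs = reachability_graph((1, 0), [((1, 0), (0, 1))])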
A novel layered method was proposed to solve the Web services composition problem. In this method, the composition problem was formally transformed into an optimal matching problem at each layer; the matching problem was then modeled with hypergraph theory and solved by computing the minimal transversals of the hypergraph. In addition, two optimization algorithms were designed to discard useless states at intermediate steps of the composition algorithm. The effectiveness of the method was evaluated with a set of experiments, and an example of travel services composition was also given. The experimental results show that the method not only automatically generates a composition tree whose leaf nodes correspond to composition solutions, but also achieves better execution time and solution quality when the two proposed optimization algorithms are adopted.
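To make the minimal-transversal step concrete, here is a brute-force sketch that works for small hypergraphs. The function name is hypothetical, and the paper's layered composition algorithm and its two pruning optimizations are not reproduced here.

from itertools import combinations

def minimal_transversals(vertices, hyperedges):
    """Return every inclusion-minimal vertex set that hits all hyperedges.
    hyperedges is a list of sets of vertices; exponential in len(vertices)."""
    hits_all = lambda s: all(s & e for e in hyperedges)
    candidates = [set(c) for r in range(1, len(vertices) + 1)
                  for c in combinations(vertices, r)]
    transversals = [s for s in candidates if hits_all(s)]
    # Keep only the hitting sets with no strictly smaller hitting subset.
    return [s for s in transversals
            if not any(t < s for t in transversals)]

# Example: edges {a,b} and {b,c}; the minimal transversals are {b} and {a,c}.
print(minimal_transversals(['a', 'b', 'c'], [{'a', 'b'}, {'b', 'c'}]))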
This paper proposes a biometric-based user authentication protocol for wireless sensor networks (WSNs), which are often deployed in unattended environments, for the case where a user wants to access data from sensor nodes. The protocol employs biometric keys and resists the threats of stolen-verifier, many logged-in users with the same login identity, guessing, replay, and impersonation attacks. Because it uses only hash functions, the protocol saves computational, communication, and energy costs. In addition, the user's password can be changed freely under the proposed protocol.
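The following sketch illustrates the general hash-only style of verification the abstract describes: the server stores a verifier rather than the raw password, and a fresh nonce keeps each login proof unique against replay. The message fields are hypothetical; the paper's actual biometric key derivation and message flow are not reproduced.

import hashlib, hmac, os

def h(*parts: bytes) -> bytes:
    # A single hash function is the only cryptographic primitive used.
    return hashlib.sha256(b"|".join(parts)).digest()

# Registration: the server stores a verifier, never the raw password.
identity, password, bio_key = b"alice", b"pw", os.urandom(32)
verifier = h(identity, password, bio_key)

# Login: a fresh nonce makes each proof unique, resisting replay.
nonce = os.urandom(16)
proof = h(verifier, nonce)

# Server-side check recomputes the proof from the stored verifier.
assert hmac.compare_digest(proof, h(verifier, nonce))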
With the rapid development of the Internet, general-purpose web crawlers have become increasingly unable to meet people's individual needs, since they cannot fetch deep web pages efficiently. The prevalence of deep web pages and the widespread use of Ajax make it difficult for general-purpose crawlers to fetch information quickly and efficiently. On the basis of the original Robots Exclusion Protocol (REP), this paper proposes a Robots Exclusion and Guidance Protocol (REGP), which integrates the independent, scattered extensions of the original Robots Protocol developed by major search engine companies. The protocol expands the file format and command set of the REP as well as two labels of the Sitemap Protocol. Through the protocol, websites can express their restriction and guidance requirements to visiting crawlers, provide general-purpose fast access to deep web pages and Ajax pages, and help crawlers obtain the open data on websites effectively. Finally, this paper presents a specific application scenario in which both a website and a crawler work with support from the protocol, and a series of experiments demonstrates the efficiency of the proposed protocol.
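Since REGP builds on the standard REP, the sketch below shows only the stock robots.txt handling a REGP-aware crawler would start from, using Python's urllib.robotparser; the REGP-specific directives and Sitemap labels introduced by the paper are not standardized and are therefore not shown, and the URLs are placeholders.

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse("""\
User-agent: *
Disallow: /private/
Sitemap: https://example.com/sitemap.xml
""".splitlines())

# A REGP-aware crawler would first honour these restrictions, then
# follow the site's guidance entries to reach deep web and Ajax pages.
print(rp.can_fetch("MyCrawler", "https://example.com/private/page"))  # False
print(rp.can_fetch("MyCrawler", "https://example.com/public/page"))   # True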