worker_connections 1024;  # maximum number of connections each worker process can handle concurrently
# multi_accept on;  # accept as many new connections as possible at once; can improve performance under high load
}
http {
##
# Basic Settings
##
sendfile on;  # use the sendfile() syscall to serve static files efficiently
tcp_nopush on;  # with sendfile, send response headers and the start of the file in one packet
tcp_nodelay on;  # disable Nagle's algorithm so small packets are sent without delay
keepalive_timeout 65;  # timeout (in seconds) for keep-alive connections
types_hash_max_size 2048;  # maximum size of the MIME types hash tables
# server_tokens off; # disable server token (i.e., server signature) in response headers to improve security
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;  # map file extensions to MIME types
default_type application/octet-stream;  # fallback MIME type for unknown file extensions
##
# SSL Settings
##
ssl_protocols TLSv1.2;  # allow only TLS 1.2; older SSL/TLS versions are disabled
ssl_prefer_server_ciphers on;  # prefer the server's cipher order over the client's
##
# Logging Settings
##
access_log /var/log/nginx/access.log;  # path to the access log file
error_log /var/log/nginx/error.log;  # path to the error log file
##
# Gzip Settings
##
gzip on;  # enable gzip compression of responses
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;  # include all configuration files in the conf.d directory
include /etc/nginx/sites-enabled/*;  # include all enabled site configurations
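As a sketch of what the included files might contain, a minimal hypothetical server block in `sites-enabled` could look like this (domain and paths are placeholders):

```nginx
# Hypothetical site config, e.g. /etc/nginx/sites-enabled/example
server {
    listen 80;
    server_name example.com;   # placeholder domain
    root /var/www/example;     # placeholder document root

    location / {
        try_files $uri $uri/ =404;  # serve the file or directory, else 404
    }
}
```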
Displays a chatbot output showing both user-submitted messages and responses. Supports a subset of Markdown including bold, italics, code, and images.
Preprocessing: this component does *not* accept input.
Postprocessing: expects function to return a {List[Tuple[str | None | Tuple, str | None | Tuple]]}, a list of tuples with user message and response messages. Messages should be strings, tuples, or Nones. If the message is a string, it can include Markdown. If it is a tuple, it should consist of (string filepath to image/video/audio, [optional string alt text]). Messages that are `None` are not displayed.
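As an illustrative sketch (not part of the component), the return format described above looks like this:

```python
# Each entry is a (user message, bot response) pair. Each side is a
# Markdown string, a (filepath, alt text) tuple for media, or None to
# hide that side of the exchange.
history = [
    ("Hello, **world**!", "Hi! Here is some `code`."),  # Markdown strings
    (("cat.png", "a cat photo"), None),                 # media tuple; reply hidden
]

for user_msg, bot_msg in history:
    assert user_msg is None or isinstance(user_msg, (str, tuple))
    assert bot_msg is None or isinstance(bot_msg, (str, tuple))
```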
color_map: Dict[str, str] | None = None,  # parameter moved to Chatbot.style()
*,
label: str | None = None,
every: float | None = None,
show_label: bool = True,
visible: bool = True,
elem_id: str | None = None,
elem_classes: List[str] | str | None = None,
**kwargs,
):
"""
Parameters:
value: Default value to show in chatbot. If callable, the function will be called whenever the app loads to set the initial value of the component.
label: component name in interface.
every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
show_label: if True, will display label.
visible: If False, component will be hidden.
elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. It can also be a tuple whose first element is a string filepath or URL to an image/video/audio, and second (optional) element is the alt text, in which case the media file is displayed. It can also be None, in which case that message is not displayed.
Returns:
List of tuples representing the message and response. Each message and response will be a string of HTML, or a dictionary with media information.
"""
if y is None:
    return []
processed_messages = []
for message_pair in y:
    assert isinstance(
        message_pair, (tuple, list)
    ), f"Expected a list of lists or list of tuples. Received: {message_pair}"
    assert (
        len(message_pair) == 2
    ), f"Expected a list of lists of length 2 or list of tuples of length 2. Received: {message_pair}"
    processed_messages.append(
        (
            # self._process_chat_messages(message_pair[0]),
            '<pre style="font-family: var(--font)">'
            + message_pair[0]
            + "</pre>",
            self._process_chat_messages(message_pair[1]),
        )
    )
return processed_messages
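A self-contained sketch of the transformation above, with `_process_chat_messages` replaced by a pass-through stub since its definition falls outside this excerpt:

```python
def postprocess_sketch(y):
    # Mirrors the loop above: user messages are wrapped in a <pre> tag;
    # bot messages would go through _process_chat_messages (stubbed here).
    if y is None:
        return []
    processed = []
    for user_msg, bot_msg in y:
        processed.append((
            '<pre style="font-family: var(--font)">' + user_msg + "</pre>",
            bot_msg,  # stub: the real code calls self._process_chat_messages
        ))
    return processed
```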
def style(self, height: int | None = None, **kwargs):
    """
    This method can be used to change the appearance of the Chatbot component.
    """
    if height is not None:
        self._style["height"] = height
    if kwargs.get("color_map") is not None:
        warnings.warn("The 'color_map' parameter has been deprecated.")
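A minimal standalone sketch of how this `style()` pattern behaves; the class name `_Styled` is a stand-in for the component:

```python
import warnings

class _Styled:
    # Stand-in class (hypothetical name) reproducing the style() logic above.
    def __init__(self):
        self._style = {}

    def style(self, height=None, **kwargs):
        if height is not None:
            self._style["height"] = height
        if kwargs.get("color_map") is not None:
            warnings.warn("The 'color_map' parameter has been deprecated.")
        return self

c = _Styled().style(height=400)  # height is recorded in the style dict
```

Passing `color_map` only triggers the deprecation warning; it no longer affects appearance.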
- Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90% ChatGPT Quality. [[Blog post]](https://vicuna.lmsys.org) [[GitHub]](https://github.com/lm-sys/FastChat)
- Koala: A Dialogue Model for Academic Research. [[Blog post]](https://bair.berkeley.edu/blog/2023/04/03/koala/) [[GitHub]](https://github.com/young-geng/EasyLM)
- This demo server. [[GitHub]](https://github.com/lm-sys/FastChat)
### Terms of use
By using this service, users are required to agree to the following terms: The service is a research preview intended for non-commercial use only. It only provides limited safety measures and may generate offensive content. It must not be used for any illegal, harmful, violent, racist, or sexual purposes. The service may collect user dialogue data for future research.
### Choose a model to chat with
- [Vicuna](https://vicuna.lmsys.org): a chat assistant fine-tuned from LLaMA on user-shared conversations. This one is expected to perform best according to our evaluation.
- [Koala](https://bair.berkeley.edu/blog/2023/04/03/koala/): a chatbot fine-tuned from LLaMA on user-shared conversations and open-source datasets. This one performs similarly to Vicuna.
- [ChatGLM](https://chatglm.cn/blog): an open bilingual dialogue language model | 开源双语对话语言模型
- [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html): a model fine-tuned from LLaMA on 52K instruction-following demonstrations.
- [LLaMA](https://arxiv.org/abs/2302.13971): open and efficient foundation language models
Note: If you are waiting in the queue, check out more benchmark results from GPT-4 on a static website [here](https://vicuna.lmsys.org/eval).
""")
learn_more_markdown = ("""
### License
The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
""")
css = code_highlight_css + """
pre {
white-space: pre-wrap; /* Since CSS 2.1 */
white-space: -moz-pre-wrap; /* Mozilla, since 1999 */
white-space: -pre-wrap; /* Opera 4-6 */
white-space: -o-pre-wrap; /* Opera 7 */
word-wrap: break-word; /* Internet Explorer 5.5+ */