449 matching posts found
2020-03-11
Weather API Endpoint
API endpoint (now deprecated): http://t.weather.sojson.com/api/weather/city/101210101
How to use: the trailing "101210101" is the city code for Hangzhou. To query your own city, replace that final 101210101 with your city's city_code. Two sample entries from the city-code list:
[
  { "id": 1, "pid": 0, "city_code": "101010100", "city_name": "北京", "post_code": "100000", "area_code": "010", "ctime": "2019-07-11 17:30:06" },
  { "id": 2, "pid": 0, "city_code": "", "city_name": "安徽", "post_code": null, "area_code": null, "ctime": null }
]
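Not part of the original post: a minimal request sketch, assuming the endpoint above were still reachable (the post marks it as deprecated) and assuming the third-party requests library is available. The city code 101010100 (Beijing) is taken from the sample list above.

import requests  # assumed dependency, not mentioned in the original post

city_code = "101010100"  # Beijing, taken from the sample city list above
url = f"http://t.weather.sojson.com/api/weather/city/{city_code}"

resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # the endpoint returns a JSON weather report for the given city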
776 reads · 0 comments · 0 likes
2020-03-11
Algorithm Training: Counting Multiplications
Resource limits: time 1.0 s, memory 999.4 MB
Problem: you are given a nonzero integer and must raise it to the n-th power. Every intermediate product may be reused later; find the minimum number of multiplications needed. For example, for 2^4: 2*2 = 2^2 (first multiplication), 2^2 * 2^2 = 2^4 (second multiplication), so at least 2 multiplications are required.
Input: the first line is m (1 <= m <= 100), the number of test cases; each test case is a single integer n (0 < n <= 100000000).
Output: for each test case, print the required number of multiplications s.
Sample input:
3
2
3
4
Sample output:
1
2
2

import java.util.*;

public class chengfacishu {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int m = sc.nextInt();
        int[] n = new int[m];
        for (int i = 0; i < n.length; i++) {
            n[i] = sc.nextInt();
        }
        for (int i = 0; i < n.length; i++) {
            int js = 0;        // multiplication count
            int temp = n[i];
            // Halve the exponent repeatedly (square-and-multiply):
            // an even step costs one squaring, an odd step costs a squaring plus one extra multiply.
            while (temp / 2 != 0) {
                if (temp % 2 == 0) {
                    js++;
                } else {
                    js += 2;
                }
                temp /= 2;
            }
            System.out.println(js);
        }
    }
}
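An observation not in the original post: the count printed by the solution above can be read straight off the binary representation of n, as one squaring per bit after the leading bit plus one extra multiplication per 1-bit after the leading bit. A short Python cross-check of that equivalence:

def mult_count(n: int) -> int:
    # squarings: bit_length - 1; extra multiplies: 1-bits after the leading bit
    return (n.bit_length() - 1) + (bin(n).count("1") - 1)

for n in (2, 3, 4):
    print(mult_count(n))  # prints 1, 2, 2 -- the sample output above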
727 reads · 0 comments · 0 likes
2020-03-11
Algorithm Training: Making Change
Resource limits: time 1.0 s, memory 256.0 MB
Problem: n people are queuing at the canteen to buy chicken rice, 25 yuan per plate. Oddly, each person holds exactly one banknote (worth 25, 50, or 100 yuan), and the canteen lady starts with no change at all. Can she give every customer correct change (assume she always makes the smartest possible choice)?
Input: the first line is an integer n, the number of people in the queue; the next n integers a[1], a[2], ..., a[n] give the banknote each person holds (smaller i means closer to the front of the queue).
Output: print YES or NO.
Sample input:
4
25 25 50 50
Sample output:
YES
Sample input:
2
25 100
Sample output:
NO
Sample input:
4
25 25 50 100
Sample output:
YES
Constraints: n does not exceed 1000000.
(Signed: a canteen lady worked to death by the Lanqiao Cup.)

import java.util.*;

public class Main {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();
        // Track the change on hand by denomination: a 50 can only be broken with a 25,
        // and change for a 100 is either 50+25 or three 25s (prefer spending a 50 first,
        // since 25s are the more flexible notes). Check feasibility person by person.
        int c25 = 0, c50 = 0;
        boolean ok = true;
        for (int i = 0; i < n; i++) {
            int bill = sc.nextInt();
            if (bill == 25) {
                c25++;
            } else if (bill == 50) {
                if (c25 > 0) { c25--; c50++; } else { ok = false; }
            } else { // 100
                if (c50 > 0 && c25 > 0) { c50--; c25--; }
                else if (c25 >= 3) { c25 -= 3; }
                else { ok = false; }
            }
        }
        System.out.println(ok ? "YES" : "NO");
    }
}
840 reads · 0 comments · 0 likes
2020-03-10
Some Information Lookup Portals
1. China Enforcement Information Disclosure Network — http://zxgk.court.gov.cn/?dt_dapp=1 — enter a name or ID number to look up someone's record of unenforced court judgments; check whether a person is a defaulter before lending them money.
2. National Public Service Platform for Standards Information — http://www.std.gov.cn — standards for every trade, including industry, group and foreign standards, whether repealed, in force, or about to take effect.
3. Credit Reference Center — https://ipcrs.pbccrc.org.cn/ — query your own personal credit record (defaults, late repayments, adverse records); banks generally advise no more than three queries a month, and poor credit affects loan applications and similar business.
4. China Judgements Online — http://wenshu.court.gov.cn — search a person's name to see their criminal cases, personal economic disputes and debt information at a glance.
5. National Medical Products Administration — http://www.nmpa.gov.cn/WS04/CL2042/ — search a product; anything not registered with the NMPA is an unlicensed product.
6. MOFCOM Direct Selling Administration — http://zxgl.mofcom.gov.cn — check whether a company is a licensed direct seller or a pyramid scheme.
7. National Enterprise Credit Information Publicity System — http://www.gsxt.gov.cn/index.html — graduates unsure whether a prospective employer is legitimate can look the company up here before interviewing.
8. "Get Lost, Putian Hospitals" — https://putianxi.github.io/index.html — look up Putian-affiliated hospitals across the country.
9. Checking a person's marital status via Alipay — method described at https://www.douban.com/group/topic/142322388/
881 reads · 0 comments · 0 likes
2020-03-09
Liepin Job-Posting Data
Download: https://file.lanol.cn/爬虫/
457 reads · 0 comments · 0 likes
2020-03-08
python3 + flask + sqlalchemy
python3 + flask + sqlalchemy + ORM (1): connecting to a MySQL database

1. Create a new Flask project in PyCharm.
2. Install flask, PyMySQL and flask-sqlalchemy.
3. Create a config.py file in the project:

DEBUG = True
# dialect+driver://root:1q2w3e4r5t@127.0.0.1:3306/
DIALECT = 'mysql'
DRIVER = 'pymysql'
USERNAME = 'root'
PASSWORD = '1q2w3e4r5t'
HOST = '127.0.0.1'
PORT = 3306
DATABASE = 'db_demo1'
SQLALCHEMY_DATABASE_URI = "{}+{}://{}:{}@{}:{}/{}?charset=utf8".format(DIALECT, DRIVER, USERNAME, PASSWORD, HOST, PORT, DATABASE)
SQLALCHEMY_TRACK_MODIFICATIONS = False
print(SQLALCHEMY_DATABASE_URI)

4. app.py:

from flask import Flask
import config
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config.from_object(config)
db = SQLAlchemy(app)
db.create_all()

@app.route('/')
def index():
    return 'index'

if __name__ == '__main__':
    app.run()

Run app.py; output like the following indicates it worked:

FLASK_APP = test_sqlalchemy.py
FLASK_ENV = development
FLASK_DEBUG = 1
In folder /Users/autotest/PycharmProjects/python3_flask
/Users/autotest/PycharmProjects/python3_flask/venv/bin/python /Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py --module --multiproc --qt-support=auto --client 127.0.0.1 --port 55365 --file flask run
pydev debugger: process 3089 is connecting
Connected to pydev debugger (build 182.4505.26)
* Serving Flask app "test_sqlalchemy.py" (lazy loading)
* Environment: development
* Debug mode: on
mysql+pymysql://root:1q2w3e4r5t@127.0.0.1:3306/db_demo1?charset=utf8
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
pydev debugger: process 3090 is connecting
mysql+pymysql://root:1q2w3e4r5t@127.0.0.1:3306/db_demo1?charset=utf8
* Debugger is active!
* Debugger PIN: 216-502-598

python3 + flask + sqlalchemy + ORM (2): adding a table to the database

Add a table for storing posts, with the fields id, title and content. config.py is the same as in part (1). In the Flask app, define a Blog model class with id, title and content; when execution reaches db.create_all(), a table named blog is created in the database automatically.

from flask import Flask
import config
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.ext.declarative import declarative_base

app = Flask(__name__)
app.config.from_object(config)
db = SQLAlchemy(app)
Base = declarative_base()

class Blog(db.Model):
    __tablename__ = 'blog'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    title = db.Column(db.String(100), nullable=False)
    content = db.Column(db.Text, nullable=True)

db.create_all()

@app.route('/')
def index():
    return 'index'

if __name__ == '__main__':
    app.run(debug=True)

Start the Flask app and inspect the database: the new table and its structure are there, so creation succeeded.

Creating, querying, updating and deleting rows:

# create
blog = Blog(title="first blog", content="this is my first blog")
db.session.add(blog)
db.session.commit()
# query
# res = Blog.query.filter(Blog.title == "first blog")[0]
res = Blog.query.filter(Blog.title == "first blog").first()
print(res.title)
# update
blog_edit = Blog.query.filter(Blog.title == "first blog").first()
blog_edit.title = "new first blog"
db.session.commit()
# delete
blog_delete = Blog.query.filter(Blog.title == "first blog").first()
db.session.delete(blog_delete)
db.session.commit()

Full code:

from flask import Flask
import config
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.ext.declarative import declarative_base

app = Flask(__name__)
app.config.from_object(config)
db = SQLAlchemy(app)
Base = declarative_base()

class Blog(db.Model):
    __tablename__ = 'blog'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    title = db.Column(db.String(100), nullable=False)
    content = db.Column(db.Text, nullable=True)

db.create_all()

@app.route('/')
def index():
    # create
    blog = Blog(title="first blog", content="this is my first blog")
    db.session.add(blog)
    db.session.commit()
    # query
    res = Blog.query.filter(Blog.title == "first blog").first()
    print(res.title)
    # update
    blog_edit = Blog.query.filter(Blog.title == "first blog").first()
    blog_edit.title = "new first blog"
    db.session.commit()
    # delete (look up by the new title, since it was just renamed above)
    blog_delete = Blog.query.filter(Blog.title == "new first blog").first()
    db.session.delete(blog_delete)
    db.session.commit()
    return 'index'

if __name__ == '__main__':
    app.run(debug=True)

python3 + flask + sqlalchemy + ORM (3): a many-to-many relationship. One article can carry several tags and one tag can belong to several articles, so article and tag are related many-to-many.

config.py:

DEBUG = True
# dialect+driver://root:1q2w3e4r5t@127.0.0.1:3306/
DIALECT = 'mysql'
DRIVER = 'pymysql'
USERNAME = 'demo_user'
PASSWORD = 'demo_123'
HOST = '172.16.10.6'
PORT = 3306
DATABASE = 'db_demo1'
SQLALCHEMY_DATABASE_URI = "{}+{}://{}:{}@{}:{}/{}?charset=utf8".format(DIALECT, DRIVER, USERNAME, PASSWORD, HOST, PORT, DATABASE)
SQLALCHEMY_TRACK_MODIFICATIONS = False
print(SQLALCHEMY_DATABASE_URI)

app.py:

from flask import Flask
import config
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.ext.declarative import declarative_base

app = Flask(__name__)
app.config.from_object(config)
db = SQLAlchemy(app)
Base = declarative_base()

# association table for the article <-> tag many-to-many relationship
article_tag = db.Table('article_tag',
    db.Column('article_id', db.Integer, db.ForeignKey("article.id"), primary_key=True),
    db.Column('tag_id', db.Integer, db.ForeignKey("tag.id"), primary_key=True)
)

class Article(db.Model):
    __tablename__ = 'article'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    title = db.Column(db.String(100), nullable=True)
    tags = db.relationship('Tag', secondary=article_tag, backref=db.backref('articles'))

class Tag(db.Model):
    __tablename__ = 'tag'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    name = db.Column(db.String(100), nullable=True)

db.create_all()

@app.route('/')
def index():
    article1 = Article(title="aaa")
    article2 = Article(title="bbb")
    tag1 = Tag(name='1111')
    tag2 = Tag(name='2222')
    article1.tags.append(tag1)
    article1.tags.append(tag2)
    article2.tags.append(tag1)
    article2.tags.append(tag2)
    db.session.add(article1)
    db.session.add(article2)
    db.session.add(tag1)
    db.session.add(tag2)
    db.session.commit()
    return 'index'

if __name__ == '__main__':
    app.run(debug=True)
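Not covered in the original write-up: once the many-to-many relationship above exists, it can be traversed in both directions. A minimal sketch, assuming the same models and an active application context as in the code above:

# tags attached to an article, through Article.tags
first_article = Article.query.filter(Article.title == "aaa").first()
print([tag.name for tag in first_article.tags])            # e.g. ['1111', '2222']

# articles carrying a tag, through the 'articles' backref declared on the relationship
first_tag = Tag.query.filter(Tag.name == "1111").first()
print([article.title for article in first_tag.articles])   # e.g. ['aaa', 'bbb']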
864 reads · 0 comments · 0 likes
2020-03-06
Scraping HD Wallpapers from the LOL Skin Site with the Scrapy Framework
Packaged build: click to enter.

Code — the spider:

# -*- coding: utf-8 -*-
import scrapy
from practice.items import PracticeItem
from urllib import parse


class LolskinSpider(scrapy.Spider):
    name = 'lolskin'
    allowed_domains = ['lolskin.cn']
    start_urls = ['https://lolskin.cn/champions.html']

    # collect the link of every champion
    def parse(self, response):
        item = PracticeItem()
        item['urls'] = response.xpath('//div[2]/div[1]/div/ul/li/a/@href').extract()
        for url in item['urls']:
            self.csurl = 'https://lolskin.cn'
            yield scrapy.Request(url=parse.urljoin(self.csurl, url), dont_filter=True, callback=self.bizhi)
        return item

    # collect every champion's skin links
    def bizhi(self, response):
        skins = response.xpath('//td/a/@href').extract()
        for skin in skins:
            yield scrapy.Request(url=parse.urljoin(self.csurl, skin), dont_filter=True, callback=self.get_bzurl)

    # for each skin, collect the wallpaper links
    def get_bzurl(self, response):
        item = PracticeItem()
        image_urls = response.xpath('//body/div[1]/div/a/@href').extract()
        image_name = response.xpath('//h1/text()').extract()
        yield {
            'image_urls': image_urls,
            'image_name': image_name
        }
        return item

items.py:

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class PracticeItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # titles = scrapy.Field()
    # yxpngs = scrapy.Field()
    urls = scrapy.Field()
    skin_name = scrapy.Field()    # skin name
    image_urls = scrapy.Field()   # skin wallpaper URLs
    images = scrapy.Field()

pipelines.py:

# -*- coding: utf-8 -*-
# Define your item pipelines here
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import os
import re
from scrapy.pipelines.images import ImagesPipeline
import scrapy

# class PracticePipeline(object):
#     def __init__(self):
#         self.file = open('text.csv', 'a+')
#
#     def process_item(self, item, spider):
#         # os.chdir('lolskin')
#         # for title in item['titles']:
#         #     os.makedirs(title)
#         skin_name = item['skin_name']
#         skin_jpg = item['skin_jpg']
#         for i in range(len(skin_name)):
#             self.file.write(f'{skin_name[i]},{skin_jpg} ')
#         self.file.flush()
#         return item
#
#     def down_bizhi(self, item, spider):
#         self.file.close()


class LoLPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield scrapy.Request(image_url, meta={'image_name': item['image_name']})

    # customize the saved path and file name
    def file_path(self, request, response=None, info=None):
        image_name = re.findall('/skin/(.*?)/', request.url)[0] + "/" + request.meta['image_name'][0] + '.jpg'
        return image_name

settings.py:

# -*- coding: utf-8 -*-
# Scrapy settings for practice project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
import os

BOT_NAME = 'practice'

SPIDER_MODULES = ['practice.spiders']
NEWSPIDER_MODULE = 'practice.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'practice (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# download delay
DOWNLOAD_DELAY = 1
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
# COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
# SPIDER_MIDDLEWARES = {
#     'practice.middlewares.PracticeSpiderMiddleware': 543,
# }

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#     'practice.middlewares.PracticeDownloaderMiddleware': 543,
# }

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     'scrapy.extensions.telnet.TelnetConsole': None,
# }

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # 'practice.pipelines.PracticePipeline': 300,
    # 'scrapy.pipelines.images.ImagesPipeline': 1,
    'practice.pipelines.LoLPipeline': 1,
}

# folder where downloaded wallpapers are stored
IMAGES_STORE = 'E:\\Python\\scrapy\\practice\\practice\\LOLskin'

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# AUTOTHROTTLE_START_DELAY = 5
# AUTOTHROTTLE_MAX_DELAY = 60
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

main.py:

from scrapy.cmdline import execute

execute(['scrapy', 'crawl', 'lolskin'])
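One practical note not mentioned in the post: Scrapy's ImagesPipeline, which LoLPipeline extends, depends on the Pillow imaging library, so installing it (pip install Pillow) alongside Scrapy is worth doing before the first run; without it the images pipeline will not work.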
636 reads · 0 comments · 0 likes